Feb 19 08:00:40 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 08:00:40 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 08:00:40 crc restorecon[4682]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc 
restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc 
restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 
crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:40 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 
08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc 
restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc 
restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 08:00:41 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 08:00:43 crc kubenswrapper[5023]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 08:00:43 crc kubenswrapper[5023]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 08:00:43 crc kubenswrapper[5023]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 08:00:43 crc kubenswrapper[5023]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 08:00:43 crc kubenswrapper[5023]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 08:00:43 crc kubenswrapper[5023]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.267685 5023 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271408 5023 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271431 5023 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271437 5023 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271445 5023 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271451 5023 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271457 5023 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271462 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271467 5023 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271472 5023 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271477 5023 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271481 5023 feature_gate.go:330] unrecognized feature gate: Example Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271486 5023 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271490 5023 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271495 5023 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271500 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271505 5023 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271510 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271515 5023 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271519 5023 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271524 5023 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271528 5023 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271533 5023 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271537 5023 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271542 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271546 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271552 5023 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271557 5023 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271562 5023 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271574 5023 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271580 5023 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271585 5023 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271589 5023 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271596 5023 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271602 5023 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271609 5023 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271614 5023 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271640 5023 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271646 5023 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271651 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271655 5023 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271660 5023 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271664 5023 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271668 5023 feature_gate.go:330] unrecognized feature 
gate: AWSClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271673 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271677 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271682 5023 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271686 5023 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271691 5023 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271695 5023 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271700 5023 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271704 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271710 5023 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271716 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271720 5023 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271725 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271729 5023 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271734 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 
08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271739 5023 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271744 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271748 5023 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271752 5023 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271757 5023 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271761 5023 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271773 5023 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271778 5023 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271782 5023 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271786 5023 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271791 5023 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271795 5023 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271800 5023 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.271804 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272660 5023 flags.go:64] 
FLAG: --address="0.0.0.0" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272676 5023 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272687 5023 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272694 5023 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272702 5023 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272708 5023 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272715 5023 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272721 5023 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272727 5023 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272732 5023 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272738 5023 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272743 5023 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272749 5023 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272754 5023 flags.go:64] FLAG: --cgroup-root="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272759 5023 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272764 5023 flags.go:64] FLAG: --client-ca-file="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272769 5023 flags.go:64] FLAG: --cloud-config="" Feb 19 08:00:43 
crc kubenswrapper[5023]: I0219 08:00:43.272775 5023 flags.go:64] FLAG: --cloud-provider="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272780 5023 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272787 5023 flags.go:64] FLAG: --cluster-domain="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272792 5023 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272798 5023 flags.go:64] FLAG: --config-dir="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272803 5023 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272809 5023 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272816 5023 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272822 5023 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272828 5023 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272833 5023 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272838 5023 flags.go:64] FLAG: --contention-profiling="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272843 5023 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272848 5023 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272853 5023 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272857 5023 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272864 5023 flags.go:64] FLAG: 
--cpu-manager-reconcile-period="10s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272868 5023 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272873 5023 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272878 5023 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272883 5023 flags.go:64] FLAG: --enable-server="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272887 5023 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272894 5023 flags.go:64] FLAG: --event-burst="100" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272900 5023 flags.go:64] FLAG: --event-qps="50" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272905 5023 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272910 5023 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272916 5023 flags.go:64] FLAG: --eviction-hard="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272922 5023 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272927 5023 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272932 5023 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272937 5023 flags.go:64] FLAG: --eviction-soft="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272942 5023 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272946 5023 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272951 5023 
flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272956 5023 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272961 5023 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272967 5023 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272972 5023 flags.go:64] FLAG: --feature-gates="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272977 5023 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272983 5023 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272989 5023 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.272995 5023 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273000 5023 flags.go:64] FLAG: --healthz-port="10248" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273005 5023 flags.go:64] FLAG: --help="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273010 5023 flags.go:64] FLAG: --hostname-override="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273015 5023 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273021 5023 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273027 5023 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273033 5023 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273038 5023 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273043 5023 
flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273048 5023 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273053 5023 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273057 5023 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273094 5023 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273101 5023 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273106 5023 flags.go:64] FLAG: --kube-reserved="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273111 5023 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273116 5023 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273122 5023 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273127 5023 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273133 5023 flags.go:64] FLAG: --lock-file="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273138 5023 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273143 5023 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273148 5023 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273157 5023 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273162 5023 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273167 5023 
flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273173 5023 flags.go:64] FLAG: --logging-format="text" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273178 5023 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273184 5023 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273221 5023 flags.go:64] FLAG: --manifest-url="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273230 5023 flags.go:64] FLAG: --manifest-url-header="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273238 5023 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273244 5023 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273251 5023 flags.go:64] FLAG: --max-pods="110" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273257 5023 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273263 5023 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273269 5023 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273274 5023 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273280 5023 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273287 5023 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273292 5023 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273305 5023 
flags.go:64] FLAG: --node-status-max-images="50" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273311 5023 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273317 5023 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273324 5023 flags.go:64] FLAG: --pod-cidr="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273329 5023 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273337 5023 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273342 5023 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273354 5023 flags.go:64] FLAG: --pods-per-core="0" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273360 5023 flags.go:64] FLAG: --port="10250" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273365 5023 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273370 5023 flags.go:64] FLAG: --provider-id="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273376 5023 flags.go:64] FLAG: --qos-reserved="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273381 5023 flags.go:64] FLAG: --read-only-port="10255" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273386 5023 flags.go:64] FLAG: --register-node="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273391 5023 flags.go:64] FLAG: --register-schedulable="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.273396 5023 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274275 5023 flags.go:64] FLAG: --registry-burst="10" Feb 19 08:00:43 crc 
kubenswrapper[5023]: I0219 08:00:43.274283 5023 flags.go:64] FLAG: --registry-qps="5" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274289 5023 flags.go:64] FLAG: --reserved-cpus="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274294 5023 flags.go:64] FLAG: --reserved-memory="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274301 5023 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274307 5023 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274313 5023 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274318 5023 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274323 5023 flags.go:64] FLAG: --runonce="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274333 5023 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274339 5023 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274345 5023 flags.go:64] FLAG: --seccomp-default="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274350 5023 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274355 5023 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274360 5023 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274366 5023 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274371 5023 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274377 5023 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 08:00:43 crc 
kubenswrapper[5023]: I0219 08:00:43.274382 5023 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274387 5023 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274392 5023 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274397 5023 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274402 5023 flags.go:64] FLAG: --system-cgroups="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274408 5023 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274417 5023 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274422 5023 flags.go:64] FLAG: --tls-cert-file="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274428 5023 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274435 5023 flags.go:64] FLAG: --tls-min-version="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274441 5023 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274446 5023 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274451 5023 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274456 5023 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274461 5023 flags.go:64] FLAG: --v="2" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274469 5023 flags.go:64] FLAG: --version="false" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274476 5023 flags.go:64] FLAG: --vmodule="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 
08:00:43.274482 5023 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274487 5023 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274633 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274640 5023 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274646 5023 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274650 5023 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274656 5023 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274660 5023 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274665 5023 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274670 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274675 5023 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274679 5023 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274684 5023 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274688 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274693 5023 feature_gate.go:330] unrecognized feature gate: 
BootcNodeManagement Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274698 5023 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274702 5023 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274706 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274711 5023 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274715 5023 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274721 5023 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274726 5023 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274730 5023 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274735 5023 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274739 5023 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274743 5023 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274748 5023 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274752 5023 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274756 5023 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274761 
5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274765 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274769 5023 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274774 5023 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274778 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274785 5023 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274791 5023 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274795 5023 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274800 5023 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274804 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274810 5023 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274816 5023 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274821 5023 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274826 5023 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274830 5023 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274835 5023 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274839 5023 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274845 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274852 5023 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274858 5023 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274863 5023 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274868 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274873 5023 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274878 5023 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274883 5023 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274890 5023 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274894 5023 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274899 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274903 5023 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274908 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274912 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274917 5023 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274921 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 
08:00:43.274925 5023 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274930 5023 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274934 5023 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274939 5023 feature_gate.go:330] unrecognized feature gate: Example Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274943 5023 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274948 5023 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274952 5023 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274958 5023 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274963 5023 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274968 5023 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.274972 5023 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.274986 5023 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.283786 5023 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.283858 5023 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.283972 5023 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.283991 5023 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.283998 5023 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284007 5023 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284015 5023 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284022 5023 feature_gate.go:330] 
unrecognized feature gate: AutomatedEtcdBackup Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284028 5023 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284034 5023 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284039 5023 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284044 5023 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284049 5023 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284056 5023 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284061 5023 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284067 5023 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284072 5023 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284077 5023 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284083 5023 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284090 5023 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284098 5023 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284104 5023 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284111 5023 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284124 5023 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284132 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284140 5023 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284147 5023 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284155 5023 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284163 5023 feature_gate.go:330] unrecognized feature gate: Example Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284170 5023 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284177 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284183 5023 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284188 5023 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284195 5023 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284202 5023 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284209 5023 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284215 5023 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284221 5023 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284226 5023 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284232 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284237 5023 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284243 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284248 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284253 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284259 5023 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284264 5023 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284270 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284275 5023 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 
08:00:43.284280 5023 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284286 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284291 5023 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284297 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284303 5023 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284308 5023 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284314 5023 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284319 5023 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284327 5023 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284335 5023 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284340 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284352 5023 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284357 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284363 5023 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284369 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284374 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284379 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284384 5023 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284389 5023 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284394 5023 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284399 5023 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284404 5023 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284410 5023 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 08:00:43 crc kubenswrapper[5023]: 
W0219 08:00:43.284415 5023 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284420 5023 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.284430 5023 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284602 5023 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284612 5023 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284636 5023 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284642 5023 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284649 5023 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284658 5023 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284665 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284671 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284677 5023 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284682 5023 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284709 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284716 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284722 5023 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284728 5023 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284736 5023 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284741 5023 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284747 5023 feature_gate.go:330] unrecognized feature gate: Example Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284752 5023 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284758 5023 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284763 5023 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 
19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284768 5023 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284776 5023 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284783 5023 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284789 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284796 5023 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284801 5023 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284807 5023 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284813 5023 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284819 5023 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284825 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284830 5023 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284835 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284841 5023 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284846 5023 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 08:00:43 crc 
kubenswrapper[5023]: W0219 08:00:43.284852 5023 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284857 5023 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284862 5023 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284867 5023 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284872 5023 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284878 5023 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284883 5023 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284888 5023 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284894 5023 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284899 5023 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284905 5023 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284910 5023 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284916 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284921 5023 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 
08:00:43.284926 5023 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284931 5023 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284939 5023 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284945 5023 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284950 5023 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284958 5023 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284964 5023 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284972 5023 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284978 5023 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284993 5023 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.284999 5023 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285005 5023 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285010 5023 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285016 5023 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285021 5023 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285026 5023 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285034 5023 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285041 5023 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285047 5023 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285053 5023 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285059 5023 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285064 5023 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.285070 5023 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.285078 5023 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.285370 5023 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.290342 5023 
bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.290476 5023 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.292399 5023 server.go:997] "Starting client certificate rotation" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.292464 5023 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.293579 5023 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-09 16:45:44.350407256 +0000 UTC Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.293747 5023 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.318381 5023 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.320503 5023 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.321878 5023 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.338681 5023 log.go:25] "Validated CRI v1 runtime API" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.377440 5023 log.go:25] 
"Validated CRI v1 image API" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.379016 5023 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.384137 5023 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-07-57-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.384168 5023 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.398534 5023 manager.go:217] Machine: {Timestamp:2026-02-19 08:00:43.395720189 +0000 UTC m=+1.052839157 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5e5c6cee-d6a5-40a2-be59-600505972de8 BootID:d46b7364-9350-4121-8387-6107f6e4f229 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ca:bd:8a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ca:bd:8a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:93:8e:54 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6a:fb:e8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7f:88:ab Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:50:b5:c6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f2:b8:f9:66:b2:e9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:13:50:6f:8b:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data 
Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 
Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.398858 5023 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.399061 5023 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.401426 5023 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.401662 5023 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.401701 5023 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.401923 5023 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.401933 5023 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.402479 5023 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.402510 5023 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.402774 5023 state_mem.go:36] "Initialized new in-memory state store" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.402856 5023 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.406520 5023 kubelet.go:418] "Attempting to sync node with API server" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.406548 5023 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.406569 5023 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.406581 5023 kubelet.go:324] "Adding apiserver pod source" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.406596 5023 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.410332 5023 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.410988 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.411046 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.411112 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.411209 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.412705 5023 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.415423 5023 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416833 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416857 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416864 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416870 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416882 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416889 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416895 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416906 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416913 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416920 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416940 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.416948 5023 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.417650 5023 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.418117 5023 server.go:1280] "Started kubelet" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.419038 5023 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.419093 5023 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.419668 5023 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.419915 5023 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:43 crc systemd[1]: Started Kubernetes Kubelet. Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.420709 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.420739 5023 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.420795 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:16:23.554627806 +0000 UTC Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.421255 5023 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.421332 5023 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.421650 5023 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 
08:00:43.421391 5023 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.422313 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.422483 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.423133 5023 server.go:460] "Adding debug handlers to kubelet server" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.423485 5023 factory.go:55] Registering systemd factory Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.423508 5023 factory.go:221] Registration of the systemd container factory successfully Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.423911 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="200ms" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.424313 5023 factory.go:153] Registering CRI-O factory Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.424572 5023 factory.go:221] Registration of the crio container factory successfully Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.425018 5023 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.425330 5023 factory.go:103] Registering Raw factory Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.425607 5023 manager.go:1196] Started watching for new ooms in manager Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.427394 5023 manager.go:319] Starting recovery of all containers Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.426550 5023 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189596fd5fc97ace default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:00:43.41809019 +0000 UTC m=+1.075209138,LastTimestamp:2026-02-19 08:00:43.41809019 +0000 UTC m=+1.075209138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428777 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428825 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 
08:00:43.428839 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428849 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428859 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428870 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428880 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428891 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428905 5023 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428917 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428929 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428939 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428950 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428963 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428974 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.428993 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429004 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429014 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429027 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429038 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429048 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" 
seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429060 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429071 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429085 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429097 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429110 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429140 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: 
I0219 08:00:43.429159 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429174 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429189 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429205 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429219 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429232 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429246 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429260 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429273 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429288 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429301 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429315 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429328 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429342 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429356 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429369 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429383 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429396 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429409 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" 
seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429425 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429440 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429453 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429465 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429476 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.429488 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: 
I0219 08:00:43.433597 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433655 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433676 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433725 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433739 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433753 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433764 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433779 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433808 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433820 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433833 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433844 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433854 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433885 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433898 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433911 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433923 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433935 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433968 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433978 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.433991 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434000 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434011 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434042 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434053 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434066 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434075 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434085 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434116 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434127 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434139 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434148 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434159 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434188 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434200 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434216 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434231 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" 
seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434243 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434286 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434296 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434310 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434320 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434329 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434362 5023 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434375 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434388 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434399 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434409 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434442 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434453 5023 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434468 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434486 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434541 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434559 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434607 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434655 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434674 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.434716 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.439891 5023 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.439971 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440024 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440044 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440065 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440100 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440120 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440152 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440185 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440206 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440220 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440235 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440281 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440297 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440310 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440321 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440890 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440914 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440928 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440963 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440975 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440986 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.440998 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441011 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441022 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441035 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441045 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441056 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441068 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441079 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441088 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441100 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441111 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441122 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" 
seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441132 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441142 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441170 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441183 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441194 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441204 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 
08:00:43.441215 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441227 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441237 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441249 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441261 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441271 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441282 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441293 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441309 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441320 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441331 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441341 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441351 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441361 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441372 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441382 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441393 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441404 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441415 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441425 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441446 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441461 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441472 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441485 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441497 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441510 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441522 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441635 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441650 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441661 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441671 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441682 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441692 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441702 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441715 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441733 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441745 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" 
seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441759 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441769 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441779 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441790 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441804 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441814 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 
08:00:43.441824 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441835 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441844 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441857 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441867 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441877 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441888 5023 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441897 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441909 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441921 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441938 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441949 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441968 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441982 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.441998 5023 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.442013 5023 reconstruct.go:97] "Volume reconstruction finished" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.442023 5023 reconciler.go:26] "Reconciler: start to sync state" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.454754 5023 manager.go:324] Recovery completed Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.463531 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.467451 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.467653 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.467851 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.469022 5023 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.469044 5023 
cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.469062 5023 state_mem.go:36] "Initialized new in-memory state store" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.472749 5023 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.475501 5023 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.475558 5023 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.475591 5023 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.475659 5023 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.476349 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.476468 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.484864 5023 policy_none.go:49] "None policy: Start" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.485589 5023 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.485637 5023 
state_mem.go:35] "Initializing new in-memory state store" Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.521740 5023 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.524056 5023 manager.go:334] "Starting Device Plugin manager" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.524170 5023 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.524187 5023 server.go:79] "Starting device plugin registration server" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.524589 5023 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.524606 5023 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.551988 5023 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.552123 5023 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.552133 5023 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.554930 5023 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.576056 5023 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 08:00:43 crc 
kubenswrapper[5023]: I0219 08:00:43.576186 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.577311 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.577450 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.577536 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.577805 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.577905 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.577976 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.579120 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.579167 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.579190 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.579202 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.579922 5023 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.579954 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580038 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580177 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580214 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580888 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580917 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580933 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580973 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.580984 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581079 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581125 5023 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581144 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581866 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581891 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581888 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581915 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581932 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.581919 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582049 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582102 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582122 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582552 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582582 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582597 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582824 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.582868 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.583211 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.583271 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.583287 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.583480 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.583510 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.583524 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.624759 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.625234 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="400ms" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.626055 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.626704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.626725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.626754 5023 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.627410 5023 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.644207 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.644243 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.644264 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.644279 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.644295 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.644311 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.644326 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.645913 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.646053 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.646193 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.647784 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.648000 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.648205 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.648343 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.648444 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751328 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751380 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751404 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751430 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751453 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751476 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751499 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751520 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751544 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751570 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751592 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751611 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751668 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751689 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751707 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751924 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.751988 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752020 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752049 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752078 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752120 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752156 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752195 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752222 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752241 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752259 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752278 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752300 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752309 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.752329 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.828316 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.829657 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.829745 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.829760 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.829798 5023 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:00:43 crc kubenswrapper[5023]: E0219 08:00:43.830541 5023 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.907748 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.932227 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.951401 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dcc8f7d5d1f2b88bc5ee0900c47ae5528a38aa1fd65b5266e15d1d3cbe33d31d WatchSource:0}: Error finding container dcc8f7d5d1f2b88bc5ee0900c47ae5528a38aa1fd65b5266e15d1d3cbe33d31d: Status 404 returned error can't find the container with id dcc8f7d5d1f2b88bc5ee0900c47ae5528a38aa1fd65b5266e15d1d3cbe33d31d Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.955122 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.966333 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8b1188f056cb7725d4427ab0bd4a15b82db33ecb0077e5fb27ea3b3669e431e1 WatchSource:0}: Error finding container 8b1188f056cb7725d4427ab0bd4a15b82db33ecb0077e5fb27ea3b3669e431e1: Status 404 returned error can't find the container with id 8b1188f056cb7725d4427ab0bd4a15b82db33ecb0077e5fb27ea3b3669e431e1 Feb 19 08:00:43 crc kubenswrapper[5023]: W0219 08:00:43.976680 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c2aff4377b4d6aeb72574d84b3c6b3203216d7fcbcec191c3577d32abd167bd9 WatchSource:0}: Error finding container c2aff4377b4d6aeb72574d84b3c6b3203216d7fcbcec191c3577d32abd167bd9: Status 404 returned error can't find the container with id c2aff4377b4d6aeb72574d84b3c6b3203216d7fcbcec191c3577d32abd167bd9 Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.977415 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:43 crc kubenswrapper[5023]: I0219 08:00:43.982784 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:44 crc kubenswrapper[5023]: W0219 08:00:44.003635 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6eb05e04a36084bcff19b3ab94f97a72e828183a2b996bebb66cfd042ed4dda1 WatchSource:0}: Error finding container 6eb05e04a36084bcff19b3ab94f97a72e828183a2b996bebb66cfd042ed4dda1: Status 404 returned error can't find the container with id 6eb05e04a36084bcff19b3ab94f97a72e828183a2b996bebb66cfd042ed4dda1 Feb 19 08:00:44 crc kubenswrapper[5023]: W0219 08:00:44.006323 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-06741763fa7b4601ca540c18cac3c479255569f3e1d17b1aec4458800698fcf1 WatchSource:0}: Error finding container 06741763fa7b4601ca540c18cac3c479255569f3e1d17b1aec4458800698fcf1: Status 404 returned error can't find the container with id 06741763fa7b4601ca540c18cac3c479255569f3e1d17b1aec4458800698fcf1 Feb 19 08:00:44 crc kubenswrapper[5023]: E0219 08:00:44.026147 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="800ms" Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.231325 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.236075 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.236128 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.236140 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.236171 5023 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:00:44 crc kubenswrapper[5023]: E0219 08:00:44.236848 5023 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Feb 19 08:00:44 crc kubenswrapper[5023]: W0219 08:00:44.244554 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:44 crc kubenswrapper[5023]: E0219 08:00:44.244681 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.421022 5023 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.421980 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-01 04:18:03.383339446 +0000 UTC Feb 19 08:00:44 crc kubenswrapper[5023]: W0219 08:00:44.440795 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:44 crc kubenswrapper[5023]: E0219 08:00:44.440889 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.488067 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c2aff4377b4d6aeb72574d84b3c6b3203216d7fcbcec191c3577d32abd167bd9"} Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.492943 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8b1188f056cb7725d4427ab0bd4a15b82db33ecb0077e5fb27ea3b3669e431e1"} Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.494350 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dcc8f7d5d1f2b88bc5ee0900c47ae5528a38aa1fd65b5266e15d1d3cbe33d31d"} Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.495983 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"06741763fa7b4601ca540c18cac3c479255569f3e1d17b1aec4458800698fcf1"} Feb 19 08:00:44 crc kubenswrapper[5023]: I0219 08:00:44.496986 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6eb05e04a36084bcff19b3ab94f97a72e828183a2b996bebb66cfd042ed4dda1"} Feb 19 08:00:44 crc kubenswrapper[5023]: W0219 08:00:44.690411 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:44 crc kubenswrapper[5023]: E0219 08:00:44.690526 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:44 crc kubenswrapper[5023]: E0219 08:00:44.827399 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="1.6s" Feb 19 08:00:44 crc kubenswrapper[5023]: W0219 08:00:44.995687 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:44 crc kubenswrapper[5023]: E0219 08:00:44.995802 5023 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.037145 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.039025 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.039076 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.039087 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.039117 5023 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:00:45 crc kubenswrapper[5023]: E0219 08:00:45.043049 5023 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.420842 5023 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.422974 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:02:34.050978773 +0000 UTC Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 
08:00:45.498005 5023 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 08:00:45 crc kubenswrapper[5023]: E0219 08:00:45.499434 5023 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.505132 5023 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f" exitCode=0 Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.505206 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.505294 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.506687 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.506743 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.506756 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.508191 5023 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="71cd3802a357f3e4bb29809049dee4e77407b8e6d5ded937afa1d9666a4d0ed9" exitCode=0 Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.508234 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"71cd3802a357f3e4bb29809049dee4e77407b8e6d5ded937afa1d9666a4d0ed9"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.508332 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.509022 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.509237 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.509261 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.509272 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.510084 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.510108 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.510122 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.510744 5023 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c" exitCode=0 
Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.510769 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.510862 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.511692 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.511726 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.511738 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.512519 5023 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91" exitCode=0 Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.512590 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.512590 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.513348 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.513376 5023 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.513387 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.515111 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.515133 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.515143 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.515167 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129"} Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.515191 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.515947 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 
08:00:45.515983 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:45 crc kubenswrapper[5023]: I0219 08:00:45.515995 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:46 crc kubenswrapper[5023]: W0219 08:00:46.107989 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:46 crc kubenswrapper[5023]: E0219 08:00:46.108350 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Feb 19 08:00:46 crc kubenswrapper[5023]: E0219 08:00:46.413521 5023 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189596fd5fc97ace default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:00:43.41809019 +0000 UTC m=+1.075209138,LastTimestamp:2026-02-19 08:00:43.41809019 +0000 UTC m=+1.075209138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.420974 5023 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.423369 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:11:41.402145246 +0000 UTC Feb 19 08:00:46 crc kubenswrapper[5023]: E0219 08:00:46.428249 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="3.2s" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.519124 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.519172 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.519185 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.519210 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.520162 5023 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.520193 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.520205 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.521750 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.521789 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.521804 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.521819 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.521832 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.521767 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.522299 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.522318 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.522327 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.523475 5023 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="723ed6bf399ae645db9a0125f64e0128c3427de7ee9d38cc8d5da31b39cad5d3" exitCode=0 Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.523516 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"723ed6bf399ae645db9a0125f64e0128c3427de7ee9d38cc8d5da31b39cad5d3"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.523609 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.524103 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.524119 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.524127 5023 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.525593 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.525605 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.526066 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8cf587f639d4701b513756716fdf96f367c2345e56577ce8ec77104b7fb0ca89"} Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.526366 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.526391 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.526400 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.526401 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.526419 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.526428 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.644015 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.645275 5023 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.645316 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.645328 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.645348 5023 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:00:46 crc kubenswrapper[5023]: E0219 08:00:46.645984 5023 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Feb 19 08:00:46 crc kubenswrapper[5023]: I0219 08:00:46.703891 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.280057 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.423712 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:05:08.06816064 +0000 UTC Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.530973 5023 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a041c15e2b611a1c768a9f5c600921a13a3b66ac583acc2700f51fdd7d769a33" exitCode=0 Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531021 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a041c15e2b611a1c768a9f5c600921a13a3b66ac583acc2700f51fdd7d769a33"} Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531107 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531149 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531166 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531216 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531227 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531290 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.531170 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.532399 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.532436 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.532450 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533020 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 
08:00:47.533042 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533054 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533056 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533093 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533113 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533119 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533148 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533164 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533756 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533781 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:47 crc kubenswrapper[5023]: I0219 08:00:47.533789 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.423981 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:08:34.751997366 +0000 UTC Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.537320 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fdec8071b33b81b45b8311ab2409fe0b81fc5a8ec74281a84f1e6c14c1e326eb"} Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.537372 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f2e54228180e29143d5e1fbfbc3c591c81c12c88daea80a816b89425708b05a1"} Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.537394 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8eed2d34e5ab929310b71ca8b920ff9aba2aa9fe1786a442fc3384584aa9d81b"} Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.537411 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e29e9cee4abe68f56c8b85ff3946c387370c075886262fb6879734ef8b740b14"} Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.537343 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.537473 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.538685 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.538737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:48 crc kubenswrapper[5023]: I0219 08:00:48.538754 5023 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.424837 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:10:14.626233045 +0000 UTC Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.546315 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"98f88e1ea46064c9898217d14e65e70280c725bb6599a3d5f517eb60f40c03e9"} Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.546413 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.547500 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.547564 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.547584 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.704172 5023 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.704265 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.796718 5023 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.846366 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.848599 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.848709 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.848735 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:49 crc kubenswrapper[5023]: I0219 08:00:49.848780 5023 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.216264 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.216465 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.216520 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.217904 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.217951 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:50 crc 
kubenswrapper[5023]: I0219 08:00:50.217962 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.425155 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:51:25.682829123 +0000 UTC Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.549548 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.550655 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.550686 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.550694 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.941290 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.941486 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.942864 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.942900 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:50 crc kubenswrapper[5023]: I0219 08:00:50.942911 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:50 crc 
kubenswrapper[5023]: I0219 08:00:50.952788 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.426365 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 13:56:20.310000419 +0000 UTC Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.552121 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.553235 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.553266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.553277 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.858678 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.859008 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.860674 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.860730 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:51 crc kubenswrapper[5023]: I0219 08:00:51.860748 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:52 crc kubenswrapper[5023]: 
I0219 08:00:52.427513 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:15:49.372865378 +0000 UTC Feb 19 08:00:52 crc kubenswrapper[5023]: I0219 08:00:52.445775 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:52 crc kubenswrapper[5023]: I0219 08:00:52.555985 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:52 crc kubenswrapper[5023]: I0219 08:00:52.557499 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:52 crc kubenswrapper[5023]: I0219 08:00:52.557549 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:52 crc kubenswrapper[5023]: I0219 08:00:52.557568 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.086695 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.279901 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.280181 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.281844 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.282032 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.282210 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.428563 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:32:00.536102727 +0000 UTC Feb 19 08:00:53 crc kubenswrapper[5023]: E0219 08:00:53.555101 5023 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.558355 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.559698 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.559726 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.559735 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.867917 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.868383 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.870394 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:53 crc kubenswrapper[5023]: I0219 08:00:53.870478 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:53 crc 
kubenswrapper[5023]: I0219 08:00:53.870497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:54 crc kubenswrapper[5023]: I0219 08:00:54.429710 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:09:06.858626702 +0000 UTC Feb 19 08:00:55 crc kubenswrapper[5023]: I0219 08:00:55.430372 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:12:58.766335841 +0000 UTC Feb 19 08:00:56 crc kubenswrapper[5023]: I0219 08:00:56.431249 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:33:57.242066365 +0000 UTC Feb 19 08:00:57 crc kubenswrapper[5023]: W0219 08:00:57.189246 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.189340 5023 trace.go:236] Trace[1280208449]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:00:47.188) (total time: 10001ms): Feb 19 08:00:57 crc kubenswrapper[5023]: Trace[1280208449]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:00:57.189) Feb 19 08:00:57 crc kubenswrapper[5023]: Trace[1280208449]: [10.001185205s] [10.001185205s] END Feb 19 08:00:57 crc kubenswrapper[5023]: E0219 08:00:57.189359 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 08:00:57 crc kubenswrapper[5023]: W0219 08:00:57.248175 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.248283 5023 trace.go:236] Trace[1668833]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:00:47.246) (total time: 10002ms): Feb 19 08:00:57 crc kubenswrapper[5023]: Trace[1668833]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:00:57.248) Feb 19 08:00:57 crc kubenswrapper[5023]: Trace[1668833]: [10.002029287s] [10.002029287s] END Feb 19 08:00:57 crc kubenswrapper[5023]: E0219 08:00:57.248312 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.422434 5023 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.432634 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:53:37.381896335 +0000 UTC Feb 19 08:00:57 crc kubenswrapper[5023]: 
I0219 08:00:57.707730 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.707978 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.709448 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.709531 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.709555 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.780529 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 08:00:57 crc kubenswrapper[5023]: W0219 08:00:57.827642 5023 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.827737 5023 trace.go:236] Trace[36435845]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:00:47.826) (total time: 10001ms): Feb 19 08:00:57 crc kubenswrapper[5023]: Trace[36435845]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:00:57.827) Feb 19 08:00:57 crc kubenswrapper[5023]: Trace[36435845]: [10.00135964s] [10.00135964s] END Feb 19 08:00:57 crc kubenswrapper[5023]: E0219 08:00:57.827758 5023 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.950181 5023 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.950241 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.967422 5023 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 08:00:57 crc kubenswrapper[5023]: I0219 08:00:57.967504 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 08:00:58 crc kubenswrapper[5023]: I0219 08:00:58.433342 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:56:06.352153107 
+0000 UTC Feb 19 08:00:58 crc kubenswrapper[5023]: I0219 08:00:58.572147 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:58 crc kubenswrapper[5023]: I0219 08:00:58.573252 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:58 crc kubenswrapper[5023]: I0219 08:00:58.573275 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:58 crc kubenswrapper[5023]: I0219 08:00:58.573283 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:58 crc kubenswrapper[5023]: I0219 08:00:58.586351 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 08:00:59 crc kubenswrapper[5023]: I0219 08:00:59.434580 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:38:16.805060581 +0000 UTC Feb 19 08:00:59 crc kubenswrapper[5023]: I0219 08:00:59.574179 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:00:59 crc kubenswrapper[5023]: I0219 08:00:59.575349 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:00:59 crc kubenswrapper[5023]: I0219 08:00:59.575401 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:00:59 crc kubenswrapper[5023]: I0219 08:00:59.575417 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:00:59 crc kubenswrapper[5023]: I0219 08:00:59.704748 5023 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: 
Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:00:59 crc kubenswrapper[5023]: I0219 08:00:59.704867 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.221099 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.221245 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.222285 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.222326 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.222338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.228466 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.435008 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:12:38.186550316 +0000 UTC Feb 19 08:01:00 crc kubenswrapper[5023]: 
I0219 08:01:00.577304 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.578246 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.578269 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.578278 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:00 crc kubenswrapper[5023]: I0219 08:01:00.685368 5023 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 08:01:01 crc kubenswrapper[5023]: I0219 08:01:01.435912 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:37:15.438681426 +0000 UTC Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.436315 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:44:52.60926026 +0000 UTC Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.451163 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.451431 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.452495 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.452531 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.452541 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:02 crc kubenswrapper[5023]: E0219 08:01:02.969029 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.971297 5023 trace.go:236] Trace[1370960127]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 08:00:51.544) (total time: 11426ms): Feb 19 08:01:02 crc kubenswrapper[5023]: Trace[1370960127]: ---"Objects listed" error: 11426ms (08:01:02.971) Feb 19 08:01:02 crc kubenswrapper[5023]: Trace[1370960127]: [11.426544578s] [11.426544578s] END Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.971322 5023 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 08:01:02 crc kubenswrapper[5023]: E0219 08:01:02.974030 5023 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.976489 5023 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58108->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.976535 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58108->192.168.126.11:17697: read: connection reset by peer" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.976495 5023 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58096->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.976633 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58096->192.168.126.11:17697: read: connection reset by peer" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.976948 5023 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.976972 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.985895 5023 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 08:01:02 crc kubenswrapper[5023]: I0219 08:01:02.993984 5023 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.033968 5023 csr.go:261] certificate signing request csr-vhp22 is approved, waiting to be issued Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.050026 5023 csr.go:257] certificate signing request csr-vhp22 is issued Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.162805 5023 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.292672 5023 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 08:01:03 crc kubenswrapper[5023]: W0219 08:01:03.292897 5023 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 08:01:03 crc kubenswrapper[5023]: W0219 08:01:03.292950 5023 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.292896 5023 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.153:43296->38.102.83.153:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189596fd8059a1b0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:00:43.96440824 +0000 UTC m=+1.621527188,LastTimestamp:2026-02-19 08:00:43.96440824 +0000 UTC m=+1.621527188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.321323 5023 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.421713 5023 apiserver.go:52] "Watching apiserver" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.426457 5023 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.426692 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.427078 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.427095 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.427149 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.427232 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.427424 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.427606 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.427682 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.427889 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.427962 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.428967 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.430091 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.430465 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.430996 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.431438 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.431453 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.431704 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.431990 5023 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.432343 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.436972 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:28:16.145570878 +0000 UTC Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.463238 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.475923 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.493083 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.506428 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.520113 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.522607 5023 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.535790 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.546556 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.558427 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.590454 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.590566 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.590974 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod 
"496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591315 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591362 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591396 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591422 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591595 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591726 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591764 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591794 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591827 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.591875 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:01:04.09183271 +0000 UTC m=+21.748951858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591875 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591890 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591909 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591936 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.591985 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592016 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592041 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592064 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592089 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592114 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592137 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592161 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592186 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592214 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 
08:01:03.592239 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592250 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592298 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592324 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592330 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592370 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592390 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592402 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592483 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592585 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592635 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592592 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592655 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592689 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592723 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592749 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592753 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592807 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592827 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592845 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592863 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592882 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592899 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592893 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592916 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592936 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592957 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593008 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593025 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593043 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593061 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593080 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593099 5023 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593115 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593138 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593159 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593177 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593197 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593217 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593234 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593271 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593287 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593302 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 
19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593317 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593356 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593397 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593413 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593430 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593446 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593461 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593479 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593494 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593512 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593530 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 08:01:03 crc kubenswrapper[5023]: 
I0219 08:01:03.593546 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593563 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593578 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593594 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593636 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593653 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593669 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593687 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593723 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593740 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593758 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593779 5023 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593801 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593816 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593833 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593851 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593891 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593908 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593926 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593944 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593963 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593981 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594000 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594020 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594039 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594057 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594074 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594092 5023 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594108 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594126 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594142 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594162 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594177 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594195 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594211 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594247 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594264 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594280 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594297 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594316 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594335 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594357 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594381 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594405 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594421 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594440 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594459 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594475 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594493 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594512 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594531 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594547 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594565 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594583 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594599 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 08:01:03 crc kubenswrapper[5023]: 
I0219 08:01:03.594860 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594883 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594901 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594919 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594935 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594962 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594979 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595025 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595042 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595059 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595076 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595095 5023 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595112 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595130 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595153 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595170 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595188 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") 
pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595205 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595222 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595660 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595681 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595698 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595715 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595734 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595751 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595771 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595788 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595806 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 08:01:03 crc 
kubenswrapper[5023]: I0219 08:01:03.595823 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595839 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595857 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592915 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.592963 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593026 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593118 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593143 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593351 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593407 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593609 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.593747 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596222 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596235 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595874 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596349 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596404 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596445 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 
19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596477 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596564 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596606 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596669 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596718 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596758 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596786 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596819 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596853 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596888 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596919 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596964 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597009 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597064 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597092 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:01:03 crc 
kubenswrapper[5023]: I0219 08:01:03.597118 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597148 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597174 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597201 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597231 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597355 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597387 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597415 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597461 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597502 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597531 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597556 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597581 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597606 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597652 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597681 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597709 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597740 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597769 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597799 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597826 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597853 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597879 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597972 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598013 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598048 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598076 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598106 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598137 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598165 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598196 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598230 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598257 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598284 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598310 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598338 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598457 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598547 5023 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598565 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598579 5023 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598593 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598608 5023 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598662 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598677 5023 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598692 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598706 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598726 5023 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598740 5023 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598755 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598769 5023 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598783 5023 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598796 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598811 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598825 5023 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598839 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598857 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598871 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598883 5023 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598896 5023 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598912 5023 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598938 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.600754 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.602048 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.596961 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597056 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594268 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594483 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594787 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594785 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595148 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595192 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595402 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595575 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.595856 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597234 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.594100 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597468 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597698 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597725 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597922 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.597980 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598373 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598706 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598898 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.598896 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.599573 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.599788 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.600054 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.600949 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.600990 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.601117 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.602990 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.601419 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.601568 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.601996 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.602310 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.602473 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.602927 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.603214 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.603454 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.603484 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.603705 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.603704 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.603790 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.603954 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604155 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604173 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604241 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604586 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604815 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604826 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604821 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.604830 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.605254 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.605299 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.605414 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.605504 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.605773 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.605794 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.606004 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.606021 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.606144 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.606281 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.606580 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.606798 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.607374 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.607473 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.607770 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.608181 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.608204 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.608251 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.608903 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.608968 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.609050 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:04.109020985 +0000 UTC m=+21.766139943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.609145 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.609405 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.609422 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.609531 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.609677 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.610088 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.610107 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.610243 5023 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.610458 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.610585 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.610865 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.611263 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.611829 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.612143 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.612425 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.612733 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.612993 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.613260 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.613951 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.614156 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.614311 5023 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf" exitCode=255 Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.614344 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf"} Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.614414 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.614804 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.614985 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.615556 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.615829 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.615998 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.616143 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.616435 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.616440 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.616815 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.616832 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.617328 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.617772 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.617976 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.618024 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.618256 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.618271 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.618276 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.618495 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.618823 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.618951 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.619107 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.619204 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.619840 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.620069 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.620333 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.620394 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.620549 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.620578 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.620808 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621152 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621215 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621259 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621507 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621519 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621833 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621921 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.621873 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.622166 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.622182 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.622309 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.623824 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.628926 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.623851 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.624324 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.624728 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.625051 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.625107 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.625447 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.625690 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.629025 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:04.129000113 +0000 UTC m=+21.786119301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.626302 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.626212 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.626544 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.626956 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.627645 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.627740 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.627864 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.628272 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.628274 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.628588 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.628731 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.628755 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.620862 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.630755 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.633102 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.633109 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.634230 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.634695 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.634899 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.634985 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.635078 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.635531 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.635668 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.635930 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.636281 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.636106 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.636858 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.637441 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.637596 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.637713 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.637891 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:04.137853937 +0000 UTC m=+21.794972895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.639096 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.639271 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.639672 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.639695 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.639821 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:03 crc kubenswrapper[5023]: E0219 08:01:03.639886 5023 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:04.13986039 +0000 UTC m=+21.796979338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.640402 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.641370 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.641576 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.641540 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.641808 5023 scope.go:117] "RemoveContainer" containerID="a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.643402 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.657034 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.657761 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.658430 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.658684 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.660407 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.662729 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.667158 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.672117 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.680351 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.681837 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.683748 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.686360 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.693855 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699487 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699534 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699638 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699651 5023 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699659 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699669 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699678 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699687 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.699695 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701348 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701361 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701370 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701379 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701405 5023 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701416 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701428 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701438 5023 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 
08:01:03.701447 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701456 5023 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701482 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701492 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701501 5023 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701509 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701517 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701526 5023 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701534 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701556 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701564 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701574 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701585 5023 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701595 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701604 5023 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc 
kubenswrapper[5023]: I0219 08:01:03.701613 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701645 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701653 5023 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701663 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701671 5023 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701680 5023 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701689 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701698 5023 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701723 5023 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701733 5023 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701742 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701751 5023 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701759 5023 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701769 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701792 5023 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701801 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701810 5023 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701817 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701826 5023 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701834 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701842 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701852 5023 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701878 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701889 5023 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701899 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701911 5023 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701923 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701954 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701965 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 
08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701975 5023 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701983 5023 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.701991 5023 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702000 5023 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702007 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702031 5023 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702040 5023 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702048 5023 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702056 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702064 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702073 5023 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702081 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702089 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702115 5023 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702130 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702141 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702152 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702162 5023 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702188 5023 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702199 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702208 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702216 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702224 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702232 5023 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702241 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702264 5023 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702272 5023 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.702281 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703542 5023 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703523 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703554 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703674 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703693 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703706 5023 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703747 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703760 5023 reconciler_common.go:293] "Volume detached for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703771 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703783 5023 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703794 5023 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703804 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703812 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703820 5023 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703828 5023 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703836 5023 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703843 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703852 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703862 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703874 5023 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703887 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703897 5023 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703905 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703913 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703921 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703930 5023 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703941 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703953 5023 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703963 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703980 5023 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703989 5023 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703998 5023 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704006 5023 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704013 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704022 5023 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704030 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704038 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704046 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704055 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704064 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.704073 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.703669 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705672 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705693 5023 reconciler_common.go:293] "Volume detached for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705704 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705713 5023 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705722 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705742 5023 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705750 5023 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705768 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705777 5023 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705785 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705793 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705801 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705809 5023 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705818 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705826 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705835 5023 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc 
kubenswrapper[5023]: I0219 08:01:03.705844 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705857 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705869 5023 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705879 5023 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705888 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705897 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705905 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705914 5023 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705924 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705933 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705942 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705951 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705960 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705969 5023 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705979 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705988 5023 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.705997 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706005 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706014 5023 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706023 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706031 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706040 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc 
kubenswrapper[5023]: I0219 08:01:03.706049 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706057 5023 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706066 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706074 5023 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.706083 5023 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.709139 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.719533 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.732641 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.742011 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.743599 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.753070 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.756971 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.760652 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 08:01:03 crc kubenswrapper[5023]: I0219 08:01:03.767665 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:03 crc kubenswrapper[5023]: W0219 08:01:03.773271 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6141a6aa552056067790256f9eb5e8525bc73c85f56f07d079769254c3bfbf3f WatchSource:0}: Error finding container 6141a6aa552056067790256f9eb5e8525bc73c85f56f07d079769254c3bfbf3f: Status 404 returned error can't find the container with id 6141a6aa552056067790256f9eb5e8525bc73c85f56f07d079769254c3bfbf3f Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.051108 5023 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 07:56:03 +0000 UTC, rotation deadline is 2026-12-25 12:21:20.087426367 +0000 UTC Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.051203 5023 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7420h20m16.036226795s for next certificate rotation Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.094264 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zbzlq"] Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.094646 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.096786 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.096970 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.097302 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.109834 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.109956 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.110077 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.110146 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:01:05.11011745 +0000 UTC m=+22.767236388 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.110175 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:05.110166391 +0000 UTC m=+22.767285339 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.111895 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.122443 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.133223 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.195634 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.210881 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.210928 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.210948 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.210979 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/46cb8e54-c22c-411b-ac49-e08f13849463-hosts-file\") pod \"node-resolver-zbzlq\" (UID: \"46cb8e54-c22c-411b-ac49-e08f13849463\") " pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.210996 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkk7\" (UniqueName: \"kubernetes.io/projected/46cb8e54-c22c-411b-ac49-e08f13849463-kube-api-access-szkk7\") pod \"node-resolver-zbzlq\" (UID: \"46cb8e54-c22c-411b-ac49-e08f13849463\") " pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211144 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211157 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211167 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211204 5023 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:05.21119136 +0000 UTC m=+22.868310308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211250 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211260 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211266 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211286 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:05.211279682 +0000 UTC m=+22.868398630 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211316 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: E0219 08:01:04.211335 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:05.211329994 +0000 UTC m=+22.868448942 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.220716 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.230417 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.253694 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.291359 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.311924 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/46cb8e54-c22c-411b-ac49-e08f13849463-hosts-file\") pod \"node-resolver-zbzlq\" (UID: \"46cb8e54-c22c-411b-ac49-e08f13849463\") " pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.311968 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szkk7\" (UniqueName: \"kubernetes.io/projected/46cb8e54-c22c-411b-ac49-e08f13849463-kube-api-access-szkk7\") pod 
\"node-resolver-zbzlq\" (UID: \"46cb8e54-c22c-411b-ac49-e08f13849463\") " pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.312081 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/46cb8e54-c22c-411b-ac49-e08f13849463-hosts-file\") pod \"node-resolver-zbzlq\" (UID: \"46cb8e54-c22c-411b-ac49-e08f13849463\") " pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.339566 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szkk7\" (UniqueName: \"kubernetes.io/projected/46cb8e54-c22c-411b-ac49-e08f13849463-kube-api-access-szkk7\") pod \"node-resolver-zbzlq\" (UID: \"46cb8e54-c22c-411b-ac49-e08f13849463\") " pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.409080 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zbzlq" Feb 19 08:01:04 crc kubenswrapper[5023]: W0219 08:01:04.420427 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46cb8e54_c22c_411b_ac49_e08f13849463.slice/crio-76967d9f0ce7fe4ed3fbc75e0c315caafbaf3df8db0c4acb68d9a941b1583538 WatchSource:0}: Error finding container 76967d9f0ce7fe4ed3fbc75e0c315caafbaf3df8db0c4acb68d9a941b1583538: Status 404 returned error can't find the container with id 76967d9f0ce7fe4ed3fbc75e0c315caafbaf3df8db0c4acb68d9a941b1583538 Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.437639 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:01:34.980856343 +0000 UTC Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.474492 5023 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-daemon-444kx"] Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.481339 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-74jld"] Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.481504 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.484767 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.485034 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.485150 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.485259 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.485538 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-t9v9m"] Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.485786 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.486140 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.489366 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.491012 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.491073 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.491124 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.494704 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.494912 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.496954 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.497872 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.506160 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.514236 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.526898 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.538680 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.551317 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.563745 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.575225 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.585522 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.595397 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.606279 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.616877 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-cni-multus\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.616921 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4610eec-5318-4742-b598-a88feb94cf7d-multus-daemon-config\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.616973 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b3e4d325-7b2d-4177-b955-cc85093996a1-rootfs\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 
08:01:04.617000 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3e4d325-7b2d-4177-b955-cc85093996a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617027 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-hostroot\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617047 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-conf-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617064 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-system-cni-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617081 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-etc-kubernetes\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617095 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b3e4d325-7b2d-4177-b955-cc85093996a1-proxy-tls\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617114 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-multus-certs\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617130 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-socket-dir-parent\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617145 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-netns\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617160 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-os-release\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617176 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-cni-bin\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617194 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cni-binary-copy\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617211 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxnk\" (UniqueName: \"kubernetes.io/projected/b3e4d325-7b2d-4177-b955-cc85093996a1-kube-api-access-vxxnk\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617259 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-system-cni-dir\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617275 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cnibin\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " 
pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617291 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-os-release\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617309 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617354 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-k8s-cni-cncf-io\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617372 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4610eec-5318-4742-b598-a88feb94cf7d-cni-binary-copy\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617402 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-cnibin\") pod \"multus-t9v9m\" (UID: 
\"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617426 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnbp\" (UniqueName: \"kubernetes.io/projected/c2403771-cd0a-411c-8666-bdeb65e9ca0d-kube-api-access-9nnbp\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617443 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-kubelet\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617462 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9mb\" (UniqueName: \"kubernetes.io/projected/c4610eec-5318-4742-b598-a88feb94cf7d-kube-api-access-9z9mb\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617494 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.617517 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-cni-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.630015 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.632040 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.632917 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.635405 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.636986 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.637025 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.637038 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6141a6aa552056067790256f9eb5e8525bc73c85f56f07d079769254c3bfbf3f"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.641264 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zbzlq" event={"ID":"46cb8e54-c22c-411b-ac49-e08f13849463","Type":"ContainerStarted","Data":"76967d9f0ce7fe4ed3fbc75e0c315caafbaf3df8db0c4acb68d9a941b1583538"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.642892 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.642981 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1e4aa366a358ace70e212e6f4db125cdc0cd634500ecd2f7ef2d706da1807893"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.643823 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d68e2a2ad56c7a354c8a2491369034b3558f1848ffbdc6b45032548c89929a54"} Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.655303 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.665787 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.679214 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.690145 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.703338 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.718245 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-cni-multus\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.718424 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4610eec-5318-4742-b598-a88feb94cf7d-multus-daemon-config\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719408 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/b3e4d325-7b2d-4177-b955-cc85093996a1-rootfs\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.718341 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-cni-multus\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719335 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4610eec-5318-4742-b598-a88feb94cf7d-multus-daemon-config\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719496 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3e4d325-7b2d-4177-b955-cc85093996a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719577 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b3e4d325-7b2d-4177-b955-cc85093996a1-rootfs\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719691 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-hostroot\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719718 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-conf-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719783 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-system-cni-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719795 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-hostroot\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719807 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-etc-kubernetes\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719844 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-conf-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc 
kubenswrapper[5023]: I0219 08:01:04.719847 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-multus-certs\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719877 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b3e4d325-7b2d-4177-b955-cc85093996a1-proxy-tls\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719905 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-netns\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719909 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-multus-certs\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719879 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-etc-kubernetes\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719929 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-socket-dir-parent\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719970 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-os-release\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719976 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-socket-dir-parent\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719880 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-system-cni-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720010 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-cni-bin\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719970 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-netns\") pod \"multus-t9v9m\" (UID: 
\"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.719991 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-cni-bin\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720058 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cni-binary-copy\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720077 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxnk\" (UniqueName: \"kubernetes.io/projected/b3e4d325-7b2d-4177-b955-cc85093996a1-kube-api-access-vxxnk\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720107 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-k8s-cni-cncf-io\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720127 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-system-cni-dir\") pod \"multus-additional-cni-plugins-74jld\" (UID: 
\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720149 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cnibin\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720163 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-os-release\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720179 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720203 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4610eec-5318-4742-b598-a88feb94cf7d-cni-binary-copy\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720219 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-cnibin\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " 
pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720227 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-os-release\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720235 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnbp\" (UniqueName: \"kubernetes.io/projected/c2403771-cd0a-411c-8666-bdeb65e9ca0d-kube-api-access-9nnbp\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720265 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cnibin\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720276 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-kubelet\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720294 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9mb\" (UniqueName: \"kubernetes.io/projected/c4610eec-5318-4742-b598-a88feb94cf7d-kube-api-access-9z9mb\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 
08:01:04.720312 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-cni-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720327 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720562 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-run-k8s-cni-cncf-io\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720558 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-os-release\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720647 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-system-cni-dir\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720767 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-cnibin\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720819 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2403771-cd0a-411c-8666-bdeb65e9ca0d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720832 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-multus-cni-dir\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.720844 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4610eec-5318-4742-b598-a88feb94cf7d-host-var-lib-kubelet\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.721224 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.721320 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/c4610eec-5318-4742-b598-a88feb94cf7d-cni-binary-copy\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.721709 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c2403771-cd0a-411c-8666-bdeb65e9ca0d-cni-binary-copy\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.722055 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3e4d325-7b2d-4177-b955-cc85093996a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.727937 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.731679 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b3e4d325-7b2d-4177-b955-cc85093996a1-proxy-tls\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.744298 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9mb\" (UniqueName: \"kubernetes.io/projected/c4610eec-5318-4742-b598-a88feb94cf7d-kube-api-access-9z9mb\") pod \"multus-t9v9m\" (UID: \"c4610eec-5318-4742-b598-a88feb94cf7d\") " pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.748579 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.752034 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nnbp\" (UniqueName: \"kubernetes.io/projected/c2403771-cd0a-411c-8666-bdeb65e9ca0d-kube-api-access-9nnbp\") pod \"multus-additional-cni-plugins-74jld\" (UID: \"c2403771-cd0a-411c-8666-bdeb65e9ca0d\") " pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.752351 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxnk\" (UniqueName: \"kubernetes.io/projected/b3e4d325-7b2d-4177-b955-cc85093996a1-kube-api-access-vxxnk\") pod \"machine-config-daemon-444kx\" (UID: \"b3e4d325-7b2d-4177-b955-cc85093996a1\") " pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.767005 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.783117 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.797397 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.800319 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.805154 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-t9v9m" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.815912 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-74jld" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.830445 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: W0219 08:01:04.838019 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4610eec_5318_4742_b598_a88feb94cf7d.slice/crio-6fcdd6cf40a64adab977600086441e1bc26deb71dc2c83c07118742100db1ed6 WatchSource:0}: Error finding container 6fcdd6cf40a64adab977600086441e1bc26deb71dc2c83c07118742100db1ed6: Status 404 returned error can't find the container with id 6fcdd6cf40a64adab977600086441e1bc26deb71dc2c83c07118742100db1ed6 Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.842260 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrqg4"] Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.843079 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.845777 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.845881 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.845996 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.846291 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.846302 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.847386 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.847385 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.855546 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.867013 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.882723 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.894503 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.906399 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.920936 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.935865 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.952183 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.965067 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 
08:01:04.982701 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:04 crc kubenswrapper[5023]: I0219 08:01:04.999535 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.015970 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022319 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-var-lib-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022364 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-netns\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022387 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-slash\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022403 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wtn\" (UniqueName: \"kubernetes.io/projected/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-kube-api-access-c2wtn\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022421 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022438 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-netd\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022454 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc 
kubenswrapper[5023]: I0219 08:01:05.022468 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-systemd\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022484 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-env-overrides\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022516 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-node-log\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022530 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-bin\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022552 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-etc-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc 
kubenswrapper[5023]: I0219 08:01:05.022565 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-log-socket\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022580 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022599 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-ovn\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022613 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-systemd-units\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022650 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-script-lib\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022667 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-kubelet\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022685 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovn-node-metrics-cert\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.022709 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-config\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.042433 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.057242 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.073689 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.089135 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 
08:01:05.112085 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123239 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123339 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-node-log\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123367 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-bin\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123388 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123408 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-etc-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123424 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-log-socket\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123442 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123459 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-ovn\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123476 
5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-systemd-units\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123492 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-script-lib\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123509 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-kubelet\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123527 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovn-node-metrics-cert\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123556 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-config\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123579 5023 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-var-lib-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123598 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-netns\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123638 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-slash\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123654 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2wtn\" (UniqueName: \"kubernetes.io/projected/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-kube-api-access-c2wtn\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123669 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123708 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123723 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-netd\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123738 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-systemd\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.123768 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-env-overrides\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.124428 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:01:07.124391118 +0000 UTC m=+24.781510066 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.124512 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-node-log\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.124556 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-bin\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.124661 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.124676 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-env-overrides\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.124699 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:07.124692655 +0000 UTC m=+24.781811603 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.124738 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-etc-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.124771 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-var-lib-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.124827 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-netns\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.124856 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-slash\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125108 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-openvswitch\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125140 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125162 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-netd\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125187 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-systemd\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125214 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-systemd-units\") pod \"ovnkube-node-mrqg4\" (UID: 
\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125236 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-log-socket\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125262 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125265 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-config\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125288 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-ovn\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125315 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-kubelet\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" 
Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.125922 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-script-lib\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.132075 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovn-node-metrics-cert\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.154391 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2wtn\" (UniqueName: \"kubernetes.io/projected/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-kube-api-access-c2wtn\") pod \"ovnkube-node-mrqg4\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.154827 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.156198 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: W0219 08:01:05.179133 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd9177d9_fb83_4fdf_bc43_c8cc552e8e48.slice/crio-b0af1be7e998ebbb197246f6208508faa6925dd3cce41a15fb1cadf6d88df52a WatchSource:0}: Error finding container b0af1be7e998ebbb197246f6208508faa6925dd3cce41a15fb1cadf6d88df52a: Status 404 returned error can't find the container with id b0af1be7e998ebbb197246f6208508faa6925dd3cce41a15fb1cadf6d88df52a Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.186064 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.218202 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.224940 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.225094 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.225222 5023 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225153 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225384 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225410 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225427 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225250 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225488 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 08:01:07.225466899 +0000 UTC m=+24.882585847 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225545 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:07.225523931 +0000 UTC m=+24.882642879 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225733 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.225812 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.226021 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:07.226006353 +0000 UTC m=+24.883125421 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.239067 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.438204 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:16:57.928789379 +0000 UTC Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.477589 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.477737 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.477791 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.477932 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.478012 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:05 crc kubenswrapper[5023]: E0219 08:01:05.478068 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.481655 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.482164 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.483094 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.483737 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.484325 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.484891 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.487039 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.487813 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.489011 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.489632 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.490278 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.491571 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.492183 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.493840 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.494518 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.498524 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.499537 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.500045 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.501196 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.501847 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.502466 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.503875 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.504311 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.505504 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.506140 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.507551 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.508400 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.509081 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.510331 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.511022 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.512021 5023 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.512128 5023 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.514013 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.515069 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.515755 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.519069 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.520474 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.521275 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.523436 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.524363 5023 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.525587 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.526345 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.527533 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.528293 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.529397 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.530142 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.531396 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.532506 5023 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.533703 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.534367 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.535470 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.536042 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.536593 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.537460 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.648782 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c" exitCode=0 Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 
08:01:05.648855 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.648886 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"b0af1be7e998ebbb197246f6208508faa6925dd3cce41a15fb1cadf6d88df52a"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.650746 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerStarted","Data":"35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.650798 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerStarted","Data":"6fcdd6cf40a64adab977600086441e1bc26deb71dc2c83c07118742100db1ed6"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.653015 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zbzlq" event={"ID":"46cb8e54-c22c-411b-ac49-e08f13849463","Type":"ContainerStarted","Data":"ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.655096 5023 generic.go:334] "Generic (PLEG): container finished" podID="c2403771-cd0a-411c-8666-bdeb65e9ca0d" containerID="0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4" exitCode=0 Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.655160 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" 
event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerDied","Data":"0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.655195 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerStarted","Data":"437f26656913c4c4e876884ec42ac3cb795da5ed19d1927a015c2e22432d8404"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.657210 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.657241 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.657254 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"74b8d66d38fd788241a0720df71e8993292de208b7931a3ec246c57888d9ad67"} Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.666279 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.678316 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.702799 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.721702 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.736229 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.746436 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.758468 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.780994 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.805495 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.821451 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.846799 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.864501 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.883726 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.900248 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.915090 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.938810 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.955428 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.966128 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.978444 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:05 crc kubenswrapper[5023]: I0219 08:01:05.990287 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:05Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.016222 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.079712 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.100450 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.138542 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.439252 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:21:39.345336367 +0000 UTC Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.665912 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"} Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.665955 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"} Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.665965 
5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"} Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.665976 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"} Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.667470 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd"} Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.669757 5023 generic.go:334] "Generic (PLEG): container finished" podID="c2403771-cd0a-411c-8666-bdeb65e9ca0d" containerID="562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c" exitCode=0 Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.669842 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerDied","Data":"562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c"} Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.682601 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.704104 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.711944 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.716796 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.721119 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.723700 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.737099 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.749377 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.767669 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.783945 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.798795 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.814207 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.836088 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.850355 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.870479 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.884217 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.899408 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.916741 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.928555 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.945002 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.960086 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.976707 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:06 crc kubenswrapper[5023]: I0219 08:01:06.991535 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:06Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.006194 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.030783 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.081296 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.114565 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.142808 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.142917 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.142975 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:01:11.142951122 +0000 UTC m=+28.800070070 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.143001 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.143054 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:11.143040234 +0000 UTC m=+28.800159182 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.155930 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.243460 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.243497 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.243533 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:07 crc 
kubenswrapper[5023]: E0219 08:01:07.243640 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243665 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243691 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243706 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243722 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:11.243703205 +0000 UTC m=+28.900822153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243654 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243761 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243760 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:11.243741896 +0000 UTC m=+28.900860854 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243770 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.243807 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:11.243798567 +0000 UTC m=+28.900917515 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.440295 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:16:09.508802641 +0000 UTC Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.476173 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.476214 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.476241 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.476370 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.476532 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.476586 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.682071 5023 generic.go:334] "Generic (PLEG): container finished" podID="c2403771-cd0a-411c-8666-bdeb65e9ca0d" containerID="e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4" exitCode=0 Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.682157 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerDied","Data":"e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4"} Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.687053 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"} Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.687127 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"} Feb 19 08:01:07 crc kubenswrapper[5023]: E0219 08:01:07.695655 5023 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.697793 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.714228 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.727479 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.750807 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.790958 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.819291 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.841469 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.854338 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.865931 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.877538 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.889826 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.900923 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:07 crc kubenswrapper[5023]: I0219 08:01:07.915347 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:07Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 
08:01:08.440653 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:22:49.182757732 +0000 UTC Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.694126 5023 generic.go:334] "Generic (PLEG): container finished" podID="c2403771-cd0a-411c-8666-bdeb65e9ca0d" containerID="04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59" exitCode=0 Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.694200 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerDied","Data":"04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59"} Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.714819 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.735461 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.751912 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.778093 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.799925 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.816474 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.829794 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.844471 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.856294 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.869452 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.882954 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.895224 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:08 crc kubenswrapper[5023]: I0219 08:01:08.913568 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:08Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.375042 5023 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 08:01:09 crc 
kubenswrapper[5023]: I0219 08:01:09.377339 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.377395 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.377414 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.377557 5023 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.383802 5023 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.384077 5023 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.385349 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.385382 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.385394 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.385409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.385420 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.399704 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.402879 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.402937 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.402958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.402973 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.402983 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.424175 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.428687 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.428739 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.428747 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.428762 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.428775 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.440935 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:30:14.497951477 +0000 UTC Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.441105 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",
\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.444040 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.444067 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.444077 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.444093 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.444105 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.455498 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.458513 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.458572 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.458584 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.458606 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.458647 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.471082 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.471190 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.472996 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.473030 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.473042 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.473060 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.473073 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.476274 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.476297 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.476406 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.476488 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.476583 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:09 crc kubenswrapper[5023]: E0219 08:01:09.476856 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.574721 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.574756 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.574766 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.574792 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.574802 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.676255 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.676287 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.676319 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.676334 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.676342 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.702554 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.704859 5023 generic.go:334] "Generic (PLEG): container finished" podID="c2403771-cd0a-411c-8666-bdeb65e9ca0d" containerID="75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2" exitCode=0 Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.704907 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerDied","Data":"75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.741223 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.760062 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.773571 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.780853 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.780927 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.780948 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.780979 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.780998 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.822422 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-74fm2"] Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.822598 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.822799 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.826216 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.826353 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.826252 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.827645 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.838545 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.854115 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.873053 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.884455 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.884491 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.884501 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.884516 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.884527 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.895023 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.909981 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.935740 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.953440 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.967071 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.975549 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-serviceca\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.975653 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-host\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.975707 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-64bj5\" (UniqueName: \"kubernetes.io/projected/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-kube-api-access-64bj5\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.979919 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.989561 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.989601 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.989613 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.989658 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.989702 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:09Z","lastTransitionTime":"2026-02-19T08:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:09 crc kubenswrapper[5023]: I0219 08:01:09.994801 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:09Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.010808 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.025396 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.037143 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.048531 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.064694 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.076437 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-serviceca\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.076494 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-host\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.076589 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64bj5\" (UniqueName: \"kubernetes.io/projected/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-kube-api-access-64bj5\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " 
pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.076829 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-host\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.077531 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-serviceca\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.084337 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.094370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.094507 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.094647 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.094770 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.094862 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.099356 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.103989 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64bj5\" (UniqueName: \"kubernetes.io/projected/0f96bf9d-2c05-444e-9efa-2f6f0ab87de3-kube-api-access-64bj5\") pod \"node-ca-74fm2\" (UID: \"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\") " pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.119553 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4f
c1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.133235 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.147993 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-74fm2" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.149060 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: W0219 08:01:10.163063 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f96bf9d_2c05_444e_9efa_2f6f0ab87de3.slice/crio-fe32022af13257233fe0e66a8db1c56d088dffeb093feb93da904b3d9593cc9d WatchSource:0}: Error finding container fe32022af13257233fe0e66a8db1c56d088dffeb093feb93da904b3d9593cc9d: Status 404 returned error can't find the container with id fe32022af13257233fe0e66a8db1c56d088dffeb093feb93da904b3d9593cc9d Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.166419 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.181122 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.197169 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.197197 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.197206 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.197218 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.197228 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.203252 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.299373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.299449 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.299469 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.299500 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.299526 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.402657 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.402725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.402741 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.402764 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.402780 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.441324 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:22:35.890657675 +0000 UTC Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.504784 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.504815 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.504823 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.504835 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.504844 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.608731 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.609121 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.609131 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.609148 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.609164 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.711380 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.711431 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.711442 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.711463 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.711476 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.713448 5023 generic.go:334] "Generic (PLEG): container finished" podID="c2403771-cd0a-411c-8666-bdeb65e9ca0d" containerID="1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e" exitCode=0 Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.713556 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerDied","Data":"1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.715287 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-74fm2" event={"ID":"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3","Type":"ContainerStarted","Data":"70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.715327 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-74fm2" event={"ID":"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3","Type":"ContainerStarted","Data":"fe32022af13257233fe0e66a8db1c56d088dffeb093feb93da904b3d9593cc9d"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.727813 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.743005 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffa
a95b82f50dd4d5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.764934 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.780830 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.796314 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.807885 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.815732 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.815766 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.815779 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.815795 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.815806 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.820430 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z 
is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.831810 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.847058 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.862175 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.877785 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.898723 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.913017 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.918021 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.918053 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.918086 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 
08:01:10.918109 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.918122 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:10Z","lastTransitionTime":"2026-02-19T08:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.925182 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:10 crc kubenswrapper[5023]: I0219 08:01:10.992218 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:10Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.005686 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.020884 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.020928 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.020937 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.020951 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.020962 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.030240 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.044057 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.054984 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.078709 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.095634 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.108849 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.123685 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.124083 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.124101 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.124126 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.124140 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.125703 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.141993 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.160419 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.177547 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.190640 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.190825 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.190988 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" 
not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.191041 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:19.191026218 +0000 UTC m=+36.848145176 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.191115 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:01:19.19110667 +0000 UTC m=+36.848225638 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.199432 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.214065 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.226586 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.226658 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.226671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.226693 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.226709 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.291833 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.291905 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.292005 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292031 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292153 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:19.29211934 +0000 UTC m=+36.949238328 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292175 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292217 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292233 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292247 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292280 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292305 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292311 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:19.292287075 +0000 UTC m=+36.949406253 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.292420 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:19.292382257 +0000 UTC m=+36.949501325 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.330517 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.330970 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.330982 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.330999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.331011 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.433896 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.433982 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.434004 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.434032 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.434049 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.441508 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:44:50.833491115 +0000 UTC Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.476601 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.476740 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.476785 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.477003 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.477797 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:11 crc kubenswrapper[5023]: E0219 08:01:11.478216 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.541048 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.541109 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.541123 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.541151 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.541165 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.644260 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.644313 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.644328 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.644352 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.644368 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.724857 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" event={"ID":"c2403771-cd0a-411c-8666-bdeb65e9ca0d","Type":"ContainerStarted","Data":"f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.732757 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.733391 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.734241 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.748768 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.748855 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.748908 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.748989 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.749010 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.752542 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.770547 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.775660 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.777224 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.788991 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.823063 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.840210 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.853136 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.853192 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.853208 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 
08:01:11.853233 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.853248 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.856127 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.872847 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.888482 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.905518 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.924509 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.939769 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.957677 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.957731 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.957746 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:11 crc 
kubenswrapper[5023]: I0219 08:01:11.957770 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.957785 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:11Z","lastTransitionTime":"2026-02-19T08:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.962955 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
ebca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.979920 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:11 crc kubenswrapper[5023]: I0219 08:01:11.992684 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.014033 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.028974 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.050675 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.062470 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.062527 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.062540 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.062562 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.062580 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.075341 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.092338 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.113957 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.136687 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.150342 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.166988 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.167288 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.167411 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc 
kubenswrapper[5023]: I0219 08:01:12.167501 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.167575 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.172362 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
ebca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.199202 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.218771 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.234573 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.252048 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.265953 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:12Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.269690 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.269718 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.269728 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc 
kubenswrapper[5023]: I0219 08:01:12.269741 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.269751 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.371719 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.371757 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.371768 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.371786 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.371794 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.441657 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:46:13.611885432 +0000 UTC Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.474663 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.474702 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.474712 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.474728 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.474736 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.577213 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.577246 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.577255 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.577269 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.577277 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.679511 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.679553 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.679562 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.679576 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.679584 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.735187 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.782405 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.782451 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.782461 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.782481 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.782492 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.889297 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.889448 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.890183 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.890323 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.890453 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.993754 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.993795 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.993803 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.993820 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:12 crc kubenswrapper[5023]: I0219 08:01:12.993832 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:12Z","lastTransitionTime":"2026-02-19T08:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.096176 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.096208 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.096217 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.096230 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.096238 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.199012 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.199060 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.199073 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.199090 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.199102 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.269901 5023 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.301774 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.301857 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.301884 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.301914 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.301935 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.404831 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.404863 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.404873 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.404888 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.404899 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.442972 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:40:46.684290551 +0000 UTC Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.476523 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:13 crc kubenswrapper[5023]: E0219 08:01:13.476776 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.476549 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:13 crc kubenswrapper[5023]: E0219 08:01:13.476899 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.476545 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:13 crc kubenswrapper[5023]: E0219 08:01:13.477244 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.502740 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.507949 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.508023 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.508051 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.508080 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.508099 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.524154 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.549301 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.563546 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379
b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.584615 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.605594 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.610316 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.610408 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.610427 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.610450 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.610467 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.627818 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z 
is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.647451 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.662325 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.675114 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.684698 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.700614 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.712986 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.713795 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.713861 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.713891 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.713932 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.713958 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.725868 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.739101 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.816561 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.816630 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.816643 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.816659 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.816668 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.872144 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.885068 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.900323 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.912457 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.918731 5023 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.918772 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.918780 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.918797 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.918807 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:13Z","lastTransitionTime":"2026-02-19T08:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.927421 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.938265 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.950041 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.971420 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.984253 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:13 crc kubenswrapper[5023]: I0219 08:01:13.999578 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.014585 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.021515 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.021566 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.021579 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.021599 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.021611 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.027978 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.048680 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.069001 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.088058 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.123994 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.124044 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.124055 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc 
kubenswrapper[5023]: I0219 08:01:14.124073 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.124086 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.227095 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.227136 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.227144 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.227160 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.227169 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.328998 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.329060 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.329077 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.329100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.329118 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.433133 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.433219 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.433236 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.433266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.433286 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.444767 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:29:46.656785441 +0000 UTC Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.538999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.539099 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.539129 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.539168 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.539194 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.642689 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.642753 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.642775 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.642799 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.642816 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.745929 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.745997 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.746019 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.746050 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.746078 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.747941 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/0.log" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.752568 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54" exitCode=1 Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.752760 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.754569 5023 scope.go:117] "RemoveContainer" containerID="0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.793438 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.817811 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.840744 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.849527 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.849572 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.849586 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.849614 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.849654 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.858549 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.877780 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.898430 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.922843 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.942513 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.952895 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.952927 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.952939 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.952960 
5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.952973 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:14Z","lastTransitionTime":"2026-02-19T08:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.972324 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:14 crc kubenswrapper[5023]: I0219 08:01:14.988970 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:14Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.006722 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.033590 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:14Z\\\",\\\"message\\\":\\\"r removal\\\\nI0219 08:01:14.332081 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:01:14.332163 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 08:01:14.332179 6299 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 08:01:14.332221 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 08:01:14.332426 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:01:14.332446 6299 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:01:14.332458 6299 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:01:14.332469 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:01:14.332483 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 08:01:14.333199 6299 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:01:14.333217 6299 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:01:14.333239 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 08:01:14.333278 6299 factory.go:656] Stopping watch factory\\\\nI0219 08:01:14.333305 6299 ovnkube.go:599] Stopped ovnkube\\\\nI0219 08:01:14.333330 6299 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:01:14.333350 6299 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
08:01:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a
3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.051775 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.056607 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.056669 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.056681 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.056699 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.056712 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.069848 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.159342 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.159411 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.159428 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.159452 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.159469 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.262152 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.262199 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.262208 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.262223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.262232 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.366142 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.366215 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.366232 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.366263 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.366280 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.445231 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 02:16:48.286919252 +0000 UTC Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.469645 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.469676 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.469685 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.469700 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.469709 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.476176 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.476284 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:15 crc kubenswrapper[5023]: E0219 08:01:15.476311 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.476365 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:15 crc kubenswrapper[5023]: E0219 08:01:15.476497 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:15 crc kubenswrapper[5023]: E0219 08:01:15.476595 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.572428 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.572466 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.572475 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.572489 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.572501 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.675788 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.675976 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.675989 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.676008 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.676018 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.758764 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/0.log" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.761850 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.762071 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.778814 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.778862 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.778871 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.778887 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.778897 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.787696 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:14Z\\\",\\\"message\\\":\\\"r removal\\\\nI0219 08:01:14.332081 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:01:14.332163 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 08:01:14.332179 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 08:01:14.332221 6299 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0219 08:01:14.332426 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:01:14.332446 6299 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:01:14.332458 6299 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:01:14.332469 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:01:14.332483 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 08:01:14.333199 6299 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:01:14.333217 6299 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:01:14.333239 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 08:01:14.333278 6299 factory.go:656] Stopping watch factory\\\\nI0219 08:01:14.333305 6299 ovnkube.go:599] Stopped ovnkube\\\\nI0219 08:01:14.333330 6299 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:01:14.333350 6299 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
08:01:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.804859 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.822683 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.837866 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.851206 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.864815 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.876975 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.881657 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.881710 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.881726 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.881753 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.881770 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.892407 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.904987 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.923112 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.945202 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.963653 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.976920 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.985330 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:15 crc 
kubenswrapper[5023]: I0219 08:01:15.985373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.985386 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.985408 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.985423 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:15Z","lastTransitionTime":"2026-02-19T08:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.991790 5023 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 08:01:15 crc kubenswrapper[5023]: I0219 08:01:15.994166 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:15Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.088603 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.088734 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.088761 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.088793 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.088818 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.193203 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.193278 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.193302 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.193338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.193361 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.296950 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.297017 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.297039 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.297068 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.297088 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.399830 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.399881 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.399895 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.399913 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.399923 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.445896 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:52:10.653268583 +0000 UTC Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.503309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.503365 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.503384 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.503420 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.503436 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.606678 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.606761 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.606797 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.606832 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.606860 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.709362 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.709432 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.709450 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.709472 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.709485 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.767541 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/1.log" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.768770 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/0.log" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.772810 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791" exitCode=1 Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.772859 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.772907 5023 scope.go:117] "RemoveContainer" containerID="0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.773724 5023 scope.go:117] "RemoveContainer" containerID="42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791" Feb 19 08:01:16 crc kubenswrapper[5023]: E0219 08:01:16.773912 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.798407 5023 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4ab
b3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.812078 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.812114 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.812125 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.812145 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.812160 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.816243 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.834653 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703
f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.873878 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:14Z\\\",\\\"message\\\":\\\"r removal\\\\nI0219 08:01:14.332081 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:01:14.332163 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 08:01:14.332179 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 08:01:14.332221 6299 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0219 08:01:14.332426 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:01:14.332446 6299 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:01:14.332458 6299 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:01:14.332469 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:01:14.332483 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 08:01:14.333199 6299 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:01:14.333217 6299 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:01:14.333239 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 08:01:14.333278 6299 factory.go:656] Stopping watch factory\\\\nI0219 08:01:14.333305 6299 ovnkube.go:599] Stopped ovnkube\\\\nI0219 08:01:14.333330 6299 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:01:14.333350 6299 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:01:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3
c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.898967 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.914265 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.914343 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.914365 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 
08:01:16.914392 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.914409 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:16Z","lastTransitionTime":"2026-02-19T08:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.917485 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.941085 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.964676 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.984582 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:16 crc kubenswrapper[5023]: I0219 08:01:16.998582 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:16Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.017681 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.017733 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.017742 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.017764 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.017778 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.019387 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.035317 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.054979 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.072402 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.081320 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755"] Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.081802 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.084707 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.084927 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.110151 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.120910 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.120961 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.120977 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.121001 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.121020 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.133908 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.150440 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.160011 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3393ca29-8dc6-4bad-b766-357502c15ae1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.160103 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh6rg\" (UniqueName: \"kubernetes.io/projected/3393ca29-8dc6-4bad-b766-357502c15ae1-kube-api-access-rh6rg\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 
08:01:17.160172 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3393ca29-8dc6-4bad-b766-357502c15ae1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.160223 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3393ca29-8dc6-4bad-b766-357502c15ae1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.183429 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:14Z\\\",\\\"message\\\":\\\"r removal\\\\nI0219 08:01:14.332081 6299 handler.go:190] Sending *v1.EgressIP event 
handler 8 for removal\\\\nI0219 08:01:14.332163 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 08:01:14.332179 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 08:01:14.332221 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 08:01:14.332426 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:01:14.332446 6299 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:01:14.332458 6299 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:01:14.332469 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:01:14.332483 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 08:01:14.333199 6299 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:01:14.333217 6299 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:01:14.333239 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 08:01:14.333278 6299 factory.go:656] Stopping watch factory\\\\nI0219 08:01:14.333305 6299 ovnkube.go:599] Stopped ovnkube\\\\nI0219 08:01:14.333330 6299 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:01:14.333350 6299 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:01:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3
c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.207596 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.224616 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.224720 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.224738 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 
08:01:17.224771 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.224796 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.237896 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.258907 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.262236 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3393ca29-8dc6-4bad-b766-357502c15ae1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.262354 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3393ca29-8dc6-4bad-b766-357502c15ae1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.262486 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3393ca29-8dc6-4bad-b766-357502c15ae1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.262563 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh6rg\" (UniqueName: \"kubernetes.io/projected/3393ca29-8dc6-4bad-b766-357502c15ae1-kube-api-access-rh6rg\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.264161 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3393ca29-8dc6-4bad-b766-357502c15ae1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.264472 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3393ca29-8dc6-4bad-b766-357502c15ae1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.273602 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3393ca29-8dc6-4bad-b766-357502c15ae1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.292370 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.300334 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh6rg\" (UniqueName: \"kubernetes.io/projected/3393ca29-8dc6-4bad-b766-357502c15ae1-kube-api-access-rh6rg\") pod \"ovnkube-control-plane-749d76644c-gl755\" (UID: \"3393ca29-8dc6-4bad-b766-357502c15ae1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.320966 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9d
a410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.328069 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.328124 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.328143 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.328174 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.328196 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.338522 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.360469 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.379433 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.399518 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.401721 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.424175 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: W0219 08:01:17.424803 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3393ca29_8dc6_4bad_b766_357502c15ae1.slice/crio-ba66406ac1ea69136c30abeab91bc96f0433d1ed307bef22417cb54e6a0198ce WatchSource:0}: Error finding container ba66406ac1ea69136c30abeab91bc96f0433d1ed307bef22417cb54e6a0198ce: Status 404 returned error can't find the container with id ba66406ac1ea69136c30abeab91bc96f0433d1ed307bef22417cb54e6a0198ce Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.431352 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.431402 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.431420 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.431447 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.431467 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.441889 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:17Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.446009 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:42:35.692899557 +0000 UTC Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.476748 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.476841 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.476766 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:17 crc kubenswrapper[5023]: E0219 08:01:17.476994 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:17 crc kubenswrapper[5023]: E0219 08:01:17.477190 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:17 crc kubenswrapper[5023]: E0219 08:01:17.477369 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.534552 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.534654 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.534676 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.534702 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.534720 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.637754 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.637810 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.637833 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.637858 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.637879 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.742725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.743341 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.743511 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.743671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.743813 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.783000 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/1.log" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.790900 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" event={"ID":"3393ca29-8dc6-4bad-b766-357502c15ae1","Type":"ContainerStarted","Data":"ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.790989 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" event={"ID":"3393ca29-8dc6-4bad-b766-357502c15ae1","Type":"ContainerStarted","Data":"ba66406ac1ea69136c30abeab91bc96f0433d1ed307bef22417cb54e6a0198ce"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.847732 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.847778 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.847788 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.847810 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.847823 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.950688 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.950739 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.950752 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.950770 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:17 crc kubenswrapper[5023]: I0219 08:01:17.950782 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:17Z","lastTransitionTime":"2026-02-19T08:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.053971 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.054022 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.054036 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.054054 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.054066 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.156525 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.156571 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.156582 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.156602 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.156613 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.259900 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.259960 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.259974 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.259999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.260025 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.362518 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.362558 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.362566 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.362583 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.362592 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.446374 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:24:51.420158446 +0000 UTC Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.466228 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.466307 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.466325 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.466352 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.466364 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.569566 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.569638 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.569652 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.569672 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.569684 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.621551 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bdvrm"] Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.622276 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:18 crc kubenswrapper[5023]: E0219 08:01:18.622373 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.639423 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\
"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.661101 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.674549 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc 
kubenswrapper[5023]: I0219 08:01:18.674691 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.674705 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.674737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.674751 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.680211 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.680363 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vdn\" (UniqueName: \"kubernetes.io/projected/9e27029b-2441-4434-bbd8-849e96acc2da-kube-api-access-g6vdn\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.688947 5023 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.706828 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.736127 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:14Z\\\",\\\"message\\\":\\\"r removal\\\\nI0219 08:01:14.332081 6299 handler.go:190] Sending *v1.EgressIP event 
handler 8 for removal\\\\nI0219 08:01:14.332163 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 08:01:14.332179 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 08:01:14.332221 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 08:01:14.332426 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:01:14.332446 6299 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:01:14.332458 6299 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:01:14.332469 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:01:14.332483 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 08:01:14.333199 6299 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:01:14.333217 6299 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:01:14.333239 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 08:01:14.333278 6299 factory.go:656] Stopping watch factory\\\\nI0219 08:01:14.333305 6299 ovnkube.go:599] Stopped ovnkube\\\\nI0219 08:01:14.333330 6299 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:01:14.333350 6299 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:01:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3
c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.760092 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.775847 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.778237 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.778332 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.778354 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc 
kubenswrapper[5023]: I0219 08:01:18.778387 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.778406 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.781431 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.781510 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vdn\" (UniqueName: \"kubernetes.io/projected/9e27029b-2441-4434-bbd8-849e96acc2da-kube-api-access-g6vdn\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:18 crc kubenswrapper[5023]: E0219 08:01:18.781652 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:18 crc kubenswrapper[5023]: E0219 08:01:18.781726 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. 
No retries permitted until 2026-02-19 08:01:19.281707469 +0000 UTC m=+36.938826427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.792227 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.798174 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" event={"ID":"3393ca29-8dc6-4bad-b766-357502c15ae1","Type":"ContainerStarted","Data":"78c608da6d59703f9e72fb364a3c20cc42eb1805314145c65b5ced76d443ab16"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.809049 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc 
kubenswrapper[5023]: I0219 08:01:18.813557 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vdn\" (UniqueName: \"kubernetes.io/projected/9e27029b-2441-4434-bbd8-849e96acc2da-kube-api-access-g6vdn\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.826756 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.843148 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.881362 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.881415 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.881428 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.881453 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.881470 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.899321 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447d
a5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.925857 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.940599 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.954394 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.971237 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.984316 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.984483 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.984589 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.984696 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.984849 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:18Z","lastTransitionTime":"2026-02-19T08:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.986509 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z 
is after 2025-08-24T17:21:41Z" Feb 19 08:01:18 crc kubenswrapper[5023]: I0219 08:01:18.998989 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:18Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.017925 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.033667 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.044537 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.070933 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:14Z\\\",\\\"message\\\":\\\"r removal\\\\nI0219 08:01:14.332081 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:01:14.332163 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 08:01:14.332179 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 08:01:14.332221 6299 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0219 08:01:14.332426 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:01:14.332446 6299 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:01:14.332458 6299 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:01:14.332469 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:01:14.332483 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 08:01:14.333199 6299 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:01:14.333217 6299 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:01:14.333239 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 08:01:14.333278 6299 factory.go:656] Stopping watch factory\\\\nI0219 08:01:14.333305 6299 ovnkube.go:599] Stopped ovnkube\\\\nI0219 08:01:14.333330 6299 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:01:14.333350 6299 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:01:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3
c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.087461 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.087516 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.087529 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.087552 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.087568 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.088001 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.102114 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.114139 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.125667 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc 
kubenswrapper[5023]: I0219 08:01:19.146373 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.163097 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.184766 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.190353 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.190419 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.190443 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.190477 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.190498 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.205859 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.223069 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.243439 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.286115 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.286251 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:01:35.286226474 +0000 UTC m=+52.943345432 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.286356 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.286407 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.286529 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.286546 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.286583 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:35.286573123 +0000 UTC m=+52.943692071 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.286609 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. No retries permitted until 2026-02-19 08:01:20.286603184 +0000 UTC m=+37.943722132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.292700 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.292736 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.292745 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.292761 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.292772 5023 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.387474 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.387532 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.387578 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.387752 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.387772 5023 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.387786 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.387771 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.387829 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:35.387815669 +0000 UTC m=+53.044934617 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.387967 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:35.387918242 +0000 UTC m=+53.045037360 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.388184 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.388279 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.388303 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.388429 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:01:35.388393384 +0000 UTC m=+53.045512372 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.396681 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.396726 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.396737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.396753 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.396766 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.447430 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:33:41.424448761 +0000 UTC Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.476262 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.476411 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.476418 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.476731 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.476938 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.477189 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.500606 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.500711 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.500737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.500775 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.500806 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.605148 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.605211 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.605224 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.605245 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.605259 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.647493 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.647605 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.647867 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.647943 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.647969 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.672735 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.678812 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.678884 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.678913 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.678952 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.678980 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.703668 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.709411 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.709463 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.709479 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.709504 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.709520 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.732874 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.738497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.738612 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.738676 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.738725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.738749 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.756668 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.761785 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.761867 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.761889 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.761921 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.761946 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.781121 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:19Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:19 crc kubenswrapper[5023]: E0219 08:01:19.781374 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.783250 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.783340 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.783358 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.783387 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.783401 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.886492 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.886582 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.886606 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.886689 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.886717 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.990494 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.990588 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.990607 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.990677 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:19 crc kubenswrapper[5023]: I0219 08:01:19.990698 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:19Z","lastTransitionTime":"2026-02-19T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.094574 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.094673 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.094692 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.094720 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.094740 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.199279 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.199370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.199393 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.199435 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.199462 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.299601 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:20 crc kubenswrapper[5023]: E0219 08:01:20.299981 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:20 crc kubenswrapper[5023]: E0219 08:01:20.300185 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. No retries permitted until 2026-02-19 08:01:22.300144303 +0000 UTC m=+39.957263291 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.302725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.302791 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.302819 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.302852 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.302871 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.406529 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.406602 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.406658 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.406689 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.406708 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.448757 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:02:25.473531825 +0000 UTC Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.476516 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:20 crc kubenswrapper[5023]: E0219 08:01:20.476780 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.510003 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.510061 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.510082 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.510106 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.510125 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.614101 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.614193 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.614212 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.614243 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.614264 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.717554 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.717685 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.717711 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.717747 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.717772 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.821342 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.821402 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.821420 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.821449 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.821469 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.924959 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.925039 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.925059 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.925089 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:20 crc kubenswrapper[5023]: I0219 08:01:20.925110 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:20Z","lastTransitionTime":"2026-02-19T08:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.028919 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.028976 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.028989 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.029013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.029027 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.132354 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.132410 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.132426 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.132447 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.132462 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.235212 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.235297 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.235330 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.235365 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.235391 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.338328 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.338417 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.338440 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.338478 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.338500 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.441569 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.441663 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.441686 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.441714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.441735 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.449492 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:52:05.929622093 +0000 UTC Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.477298 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.477340 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.477401 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:21 crc kubenswrapper[5023]: E0219 08:01:21.477528 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:21 crc kubenswrapper[5023]: E0219 08:01:21.477695 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:21 crc kubenswrapper[5023]: E0219 08:01:21.477881 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.545189 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.545295 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.545315 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.545344 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.545365 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.648864 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.648932 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.648959 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.648996 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.649020 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.752474 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.752544 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.752566 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.752601 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.752661 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.855993 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.856074 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.856102 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.856131 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.856152 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.959804 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.959888 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.959920 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.959957 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:21 crc kubenswrapper[5023]: I0219 08:01:21.959983 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:21Z","lastTransitionTime":"2026-02-19T08:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.063310 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.063371 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.063381 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.063402 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.063416 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.166986 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.167071 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.167093 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.167131 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.167155 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.270328 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.270403 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.270423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.270452 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.270478 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.325516 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:22 crc kubenswrapper[5023]: E0219 08:01:22.325834 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:22 crc kubenswrapper[5023]: E0219 08:01:22.325961 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. No retries permitted until 2026-02-19 08:01:26.325932476 +0000 UTC m=+43.983051424 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.374517 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.374673 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.374708 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.374747 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.374770 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.449746 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:42:18.239841711 +0000 UTC Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.476212 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:22 crc kubenswrapper[5023]: E0219 08:01:22.476484 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.479010 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.479068 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.479087 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.479113 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.479132 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.583368 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.583416 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.583426 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.583444 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.583457 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.687016 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.687070 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.687083 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.687103 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.687121 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.791055 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.791139 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.791159 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.791193 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.791228 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.895509 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.895604 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.895661 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.895699 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.895721 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.999023 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.999111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.999137 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.999178 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:22 crc kubenswrapper[5023]: I0219 08:01:22.999211 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:22Z","lastTransitionTime":"2026-02-19T08:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.103116 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.103470 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.103566 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.103707 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.103814 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.207894 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.207978 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.207999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.208033 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.208055 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.311833 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.311944 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.311968 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.312001 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.312025 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.416059 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.416114 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.416130 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.416157 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.416177 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.450832 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:38:37.902416342 +0000 UTC Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.476399 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.476503 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:23 crc kubenswrapper[5023]: E0219 08:01:23.476637 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:23 crc kubenswrapper[5023]: E0219 08:01:23.476846 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.477025 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:23 crc kubenswrapper[5023]: E0219 08:01:23.477379 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.503264 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.521421 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.521470 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.521479 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.521513 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.521527 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.525604 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.544461 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.567250 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7202eb9a5e521e6951c9bc78757693b630a557fce89698297174f5060cbb54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:14Z\\\",\\\"message\\\":\\\"r removal\\\\nI0219 08:01:14.332081 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 08:01:14.332163 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 08:01:14.332179 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 08:01:14.332221 6299 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0219 08:01:14.332426 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 08:01:14.332446 6299 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 08:01:14.332458 6299 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 08:01:14.332469 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 08:01:14.332483 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 08:01:14.333199 6299 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 08:01:14.333217 6299 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 08:01:14.333239 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 08:01:14.333278 6299 factory.go:656] Stopping watch factory\\\\nI0219 08:01:14.333305 6299 ovnkube.go:599] Stopped ovnkube\\\\nI0219 08:01:14.333330 6299 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 08:01:14.333350 6299 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 08:01:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3
c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.585731 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.604405 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.619571 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.625585 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.625703 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.625724 5023 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.625754 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.625775 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.639465 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 
19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.663190 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93
e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.687434 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.706292 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.730588 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.730661 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.730677 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.730696 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.730712 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.733528 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.751202 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.773387 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.794445 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.808232 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:23Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.833161 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.833229 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.833245 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.833266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.833281 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.936471 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.936542 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.936563 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.936593 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:23 crc kubenswrapper[5023]: I0219 08:01:23.936655 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:23Z","lastTransitionTime":"2026-02-19T08:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.039958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.040021 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.040040 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.040314 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.040357 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.143355 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.143409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.143432 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.143466 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.143491 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.247759 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.247858 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.247885 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.247924 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.247953 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.351309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.351378 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.351397 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.351425 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.351442 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.452251 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:13:24.531525598 +0000 UTC Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.454787 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.454868 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.454888 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.454925 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.454948 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.476301 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:24 crc kubenswrapper[5023]: E0219 08:01:24.476551 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.558761 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.558846 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.558866 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.558893 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.558911 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.662704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.662793 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.662818 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.662853 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.662873 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.766682 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.766749 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.766772 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.766803 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.766826 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.870321 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.870419 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.870448 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.870483 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.870510 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.973838 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.973913 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.973931 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.973958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:24 crc kubenswrapper[5023]: I0219 08:01:24.974010 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:24Z","lastTransitionTime":"2026-02-19T08:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.077864 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.077961 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.077979 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.078015 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.078037 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.182374 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.182468 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.182489 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.182522 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.182542 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.286268 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.286739 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.286994 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.287200 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.287420 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.391575 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.392054 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.392268 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.392495 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.392752 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.452957 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:02:18.996542007 +0000 UTC Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.477050 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.477124 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.477050 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:25 crc kubenswrapper[5023]: E0219 08:01:25.477284 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:25 crc kubenswrapper[5023]: E0219 08:01:25.477408 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:25 crc kubenswrapper[5023]: E0219 08:01:25.477580 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.495689 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.495749 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.495768 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.495794 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.495897 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.599715 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.599780 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.599805 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.599835 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.599857 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.703699 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.703774 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.703794 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.703823 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.703841 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.807512 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.807597 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.807616 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.807704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.807725 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.911352 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.911441 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.911463 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.911496 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:25 crc kubenswrapper[5023]: I0219 08:01:25.911524 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:25Z","lastTransitionTime":"2026-02-19T08:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.015419 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.015473 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.015485 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.015505 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.015522 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.118876 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.118923 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.118931 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.118948 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.118961 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.221427 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.221500 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.221512 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.221533 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.221545 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.325146 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.325216 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.325233 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.325266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.325287 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.381381 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:26 crc kubenswrapper[5023]: E0219 08:01:26.381577 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:26 crc kubenswrapper[5023]: E0219 08:01:26.381674 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. No retries permitted until 2026-02-19 08:01:34.381655163 +0000 UTC m=+52.038774121 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.428228 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.428272 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.428280 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.428294 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.428304 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.454258 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:46:51.67418012 +0000 UTC Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.476446 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:26 crc kubenswrapper[5023]: E0219 08:01:26.476659 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.534743 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.534793 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.534808 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.534824 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.534834 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.639274 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.639525 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.639645 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.639725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.639787 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.742557 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.742891 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.742988 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.743086 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.743172 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.846286 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.846382 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.846403 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.846438 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.846459 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.950099 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.950213 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.950249 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.950283 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:26 crc kubenswrapper[5023]: I0219 08:01:26.950309 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:26Z","lastTransitionTime":"2026-02-19T08:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.053946 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.054083 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.054097 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.054117 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.054130 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.157336 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.157409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.157427 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.157534 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.157605 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.261592 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.261656 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.261668 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.261688 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.261699 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.365527 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.365583 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.365595 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.365634 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.365648 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.455362 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:48:47.654011207 +0000 UTC Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.469180 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.469252 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.469272 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.469299 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.469317 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.476826 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.476890 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.476827 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:27 crc kubenswrapper[5023]: E0219 08:01:27.477009 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:27 crc kubenswrapper[5023]: E0219 08:01:27.477095 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:27 crc kubenswrapper[5023]: E0219 08:01:27.477209 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.478367 5023 scope.go:117] "RemoveContainer" containerID="42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.500005 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d
361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.528028 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.542756 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.565175 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.573344 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.573383 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.573393 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.573407 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.573418 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.585885 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.600315 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.616414 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.630347 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc 
kubenswrapper[5023]: I0219 08:01:27.646475 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.667221 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.676717 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.676774 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.676794 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.676820 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.676839 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.681819 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.694271 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.708010 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.734209 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.749648 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.764525 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.782905 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.783659 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.783721 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.783744 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.783757 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.841323 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/1.log" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.845776 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.845943 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.863394 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.876172 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.886476 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.886505 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.886518 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.886534 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.886546 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.903437 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.921533 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.938432 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.958017 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.974732 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.988728 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.988760 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.988769 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 
08:01:27.988783 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.988793 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:27Z","lastTransitionTime":"2026-02-19T08:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:27 crc kubenswrapper[5023]: I0219 08:01:27.991897 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:27Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.006384 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.018025 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc 
kubenswrapper[5023]: I0219 08:01:28.038795 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.051969 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.067514 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.081733 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.091090 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.091127 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.091136 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.091151 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.091160 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.093827 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b554
38b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.116026 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.193522 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.193562 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.193570 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.193593 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.193602 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.295771 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.295852 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.295867 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.295885 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.295899 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.398568 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.398609 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.398647 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.398665 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.398677 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.455841 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:31:05.151339773 +0000 UTC Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.476170 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:28 crc kubenswrapper[5023]: E0219 08:01:28.476355 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.500894 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.500928 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.500941 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.500958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.500966 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.605023 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.605089 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.605111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.605134 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.605155 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.707529 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.707605 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.707667 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.707695 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.707714 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.809926 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.809964 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.809972 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.809985 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.809995 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.849843 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/2.log" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.850524 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/1.log" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.853388 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f" exitCode=1 Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.853434 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.853474 5023 scope.go:117] "RemoveContainer" containerID="42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.854362 5023 scope.go:117] "RemoveContainer" containerID="f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f" Feb 19 08:01:28 crc kubenswrapper[5023]: E0219 08:01:28.854591 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.882578 5023 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4ab
b3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.897891 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.912050 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.912087 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.912096 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.912111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.912120 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:28Z","lastTransitionTime":"2026-02-19T08:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.912300 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.937733 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed 
attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323
a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.954460 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.975694 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:28 crc kubenswrapper[5023]: I0219 08:01:28.993435 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:28Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.011812 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc 
kubenswrapper[5023]: I0219 08:01:29.015683 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.016033 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.016231 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.016433 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.016788 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.030079 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.042976 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.061764 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.075652 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.088245 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.100844 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.114123 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.119901 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.120151 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.120244 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.120329 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.120441 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.127246 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:29Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.223708 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.223782 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.223804 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.223836 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.223856 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.350775 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.350822 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.350832 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.350847 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.350857 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.453145 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.453182 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.453192 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.453206 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.453216 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.456238 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:58:46.513957741 +0000 UTC Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.475899 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.476037 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.475911 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:29 crc kubenswrapper[5023]: E0219 08:01:29.476224 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:29 crc kubenswrapper[5023]: E0219 08:01:29.476131 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:29 crc kubenswrapper[5023]: E0219 08:01:29.476406 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.556100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.556203 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.556222 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.556338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.556355 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.659680 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.659746 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.659768 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.659799 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.659822 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.761893 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.761927 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.761937 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.761950 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.761961 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.858247 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/2.log" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.863939 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.863990 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.863999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.864013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.864023 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.966204 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.966234 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.966266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.966280 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:29 crc kubenswrapper[5023]: I0219 08:01:29.966289 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:29Z","lastTransitionTime":"2026-02-19T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.021317 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.021355 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.021363 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.021377 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.021387 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: E0219 08:01:30.038407 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.043475 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.043532 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.043565 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.043589 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.043652 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: E0219 08:01:30.062033 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.066195 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.066250 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.066267 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.066289 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.066306 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: E0219 08:01:30.086046 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.090319 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.090365 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.090376 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.090393 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.090407 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: E0219 08:01:30.111791 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.115761 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.115799 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.115809 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.115825 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.115834 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: E0219 08:01:30.132513 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:30Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:30 crc kubenswrapper[5023]: E0219 08:01:30.132653 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.134908 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.134941 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.134950 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.134963 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.134973 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.237133 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.237170 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.237179 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.237193 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.237201 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.340323 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.340762 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.340920 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.341051 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.341168 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.443978 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.444025 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.444039 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.444057 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.444071 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.457269 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:24:09.504596242 +0000 UTC Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.476200 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:30 crc kubenswrapper[5023]: E0219 08:01:30.476527 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.545825 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.545881 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.545901 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.545922 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.545935 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.648674 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.648715 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.648723 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.648738 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.648749 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.751541 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.751589 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.751600 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.751643 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.751657 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.854296 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.854340 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.854348 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.854364 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.854374 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.957269 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.957356 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.957373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.957395 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:30 crc kubenswrapper[5023]: I0219 08:01:30.957410 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:30Z","lastTransitionTime":"2026-02-19T08:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.059648 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.059704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.059745 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.059766 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.059775 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.163197 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.163236 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.163250 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.163266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.163277 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.265780 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.265848 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.265871 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.265903 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.265926 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.369016 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.369085 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.369112 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.369138 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.369157 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.458172 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:52:24.612889828 +0000 UTC Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.471818 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.471876 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.471894 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.471923 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.471951 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.476155 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.476193 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.476190 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:31 crc kubenswrapper[5023]: E0219 08:01:31.476351 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:31 crc kubenswrapper[5023]: E0219 08:01:31.476715 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:31 crc kubenswrapper[5023]: E0219 08:01:31.476939 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.575033 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.575100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.575112 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.575133 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.575147 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.677659 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.677708 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.677718 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.677732 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.677742 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.780699 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.780742 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.780753 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.780770 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.780783 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.883931 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.884018 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.884042 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.884110 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.884139 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.987587 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.987706 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.987727 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.987754 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:31 crc kubenswrapper[5023]: I0219 08:01:31.987777 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:31Z","lastTransitionTime":"2026-02-19T08:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.090558 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.090694 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.090720 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.090757 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.090792 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.193257 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.193318 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.193330 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.193352 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.193374 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.296283 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.296396 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.296416 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.296442 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.296458 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.399115 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.399219 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.399240 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.399266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.399284 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.458839 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:31:45.61135542 +0000 UTC Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.476178 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:32 crc kubenswrapper[5023]: E0219 08:01:32.476374 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.502803 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.502892 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.502917 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.502949 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.502972 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.606174 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.606258 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.606278 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.606308 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.606332 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.709933 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.709999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.710017 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.710041 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.710059 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.812787 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.812873 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.812894 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.812919 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.812936 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.916043 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.916112 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.916135 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.916166 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:32 crc kubenswrapper[5023]: I0219 08:01:32.916192 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:32Z","lastTransitionTime":"2026-02-19T08:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.019111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.019199 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.019223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.019248 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.019265 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.122854 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.122916 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.122939 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.122968 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.122990 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.225767 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.225839 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.225848 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.225884 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.225898 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.288183 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.301424 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.303726 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.317560 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.328751 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.328780 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.328812 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.328829 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.328840 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.331249 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f
a1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.343229 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.357741 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.375382 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 
obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed 
attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323
a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.387341 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.398891 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.409237 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.418824 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc 
kubenswrapper[5023]: I0219 08:01:33.431679 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.431742 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.431765 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.431791 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.431809 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.432233 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.447410 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.459401 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:06:14.896857751 +0000 UTC Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.463494 5023 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.475773 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.475939 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.476039 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:33 crc kubenswrapper[5023]: E0219 08:01:33.476127 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:33 crc kubenswrapper[5023]: E0219 08:01:33.476268 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.476383 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:33 crc kubenswrapper[5023]: E0219 08:01:33.476475 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.487659 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.497464 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.534735 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.534783 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.534792 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc 
kubenswrapper[5023]: I0219 08:01:33.534806 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.534816 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.535411 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.571578 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.584203 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.595133 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc 
kubenswrapper[5023]: I0219 08:01:33.605725 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.618448 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.631462 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.636821 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.636846 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.636861 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.636875 
5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.636883 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.642032 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.652445 5023 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.663993 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.675765 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.686440 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.702542 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.713118 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.726573 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.738585 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.738828 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.738890 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.738950 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.739117 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.739890 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.758097 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42036603bec974cbbe4496a6b61d513e69da507fffb9afed9d1dfdd1723af791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:16Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 08:01:15.977221 6420 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed 
attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323
a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:33Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.841373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.841712 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.841725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.841743 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.841755 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.944041 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.944083 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.944098 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.944118 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:33 crc kubenswrapper[5023]: I0219 08:01:33.944131 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:33Z","lastTransitionTime":"2026-02-19T08:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.016319 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.017892 5023 scope.go:117] "RemoveContainer" containerID="f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f" Feb 19 08:01:34 crc kubenswrapper[5023]: E0219 08:01:34.018315 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.030333 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.045763 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.046928 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.047015 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.047042 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.047078 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.047102 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.064519 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.080779 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.094230 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.108265 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.121242 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.138085 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.150142 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.150214 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.150227 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc 
kubenswrapper[5023]: I0219 08:01:34.150245 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.150257 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.160795 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
ebca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.179725 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.197829 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.210056 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.221378 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.240004 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.252748 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.252825 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.252873 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.252885 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.252906 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.252919 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.263062 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6
e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.276970 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:34Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.355692 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.355748 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.355766 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.355791 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.355808 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.400751 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:34 crc kubenswrapper[5023]: E0219 08:01:34.400911 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:34 crc kubenswrapper[5023]: E0219 08:01:34.400973 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. No retries permitted until 2026-02-19 08:01:50.400955983 +0000 UTC m=+68.058074941 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.458017 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.458092 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.458110 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.458129 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.458141 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.460084 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:51:15.704460189 +0000 UTC Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.476415 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:34 crc kubenswrapper[5023]: E0219 08:01:34.476540 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.559983 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.560025 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.560047 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.560068 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.560084 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.662167 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.662476 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.662574 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.662714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.662819 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.765297 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.765353 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.765370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.765391 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.765404 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.868240 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.868288 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.868302 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.868322 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.868336 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.970497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.970771 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.970792 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.970813 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:34 crc kubenswrapper[5023]: I0219 08:01:34.970828 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:34Z","lastTransitionTime":"2026-02-19T08:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.073753 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.073813 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.073830 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.073858 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.073876 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.176357 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.176390 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.176398 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.176413 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.176422 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.280183 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.280212 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.280221 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.280234 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.280243 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.308105 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.308223 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.308350 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.308394 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:02:07.308382008 +0000 UTC m=+84.965500956 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.308530 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:02:07.308524442 +0000 UTC m=+84.965643390 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.383683 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.383753 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.383775 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.383802 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.383826 5023 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.409273 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.409373 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.409428 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409574 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409684 5023 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409721 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:02:07.409696886 +0000 UTC m=+85.066815874 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409735 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409756 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409802 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409847 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 08:02:07.409819439 +0000 UTC m=+85.066938427 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409854 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409880 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.409972 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 08:02:07.409943952 +0000 UTC m=+85.067062930 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.460185 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:13:58.359524324 +0000 UTC Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.476484 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.476523 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.476663 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.476484 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.476685 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:35 crc kubenswrapper[5023]: E0219 08:01:35.477025 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.487468 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.487497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.487507 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.487517 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.487525 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.590388 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.590424 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.590432 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.590446 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.590456 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.693412 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.693469 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.693487 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.693511 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.693528 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.796565 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.796663 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.796711 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.796737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.796753 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.900486 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.900564 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.900588 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.900662 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:35 crc kubenswrapper[5023]: I0219 08:01:35.900683 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:35Z","lastTransitionTime":"2026-02-19T08:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.003052 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.003128 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.003152 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.003181 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.003205 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.105968 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.106013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.106027 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.106051 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.106066 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.208219 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.208275 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.208287 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.208308 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.208320 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.310799 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.310827 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.310835 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.310848 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.310856 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.412676 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.412708 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.412717 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.412729 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.412741 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.461107 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:11:55.999667249 +0000 UTC Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.476116 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:36 crc kubenswrapper[5023]: E0219 08:01:36.476272 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.514801 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.514834 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.514843 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.514860 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.514870 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.617671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.617765 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.617784 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.617807 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.617851 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.722515 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.722607 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.722683 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.722715 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.722784 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.826999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.827046 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.827055 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.827072 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.827081 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.929647 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.929697 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.929711 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.929732 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:36 crc kubenswrapper[5023]: I0219 08:01:36.929745 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:36Z","lastTransitionTime":"2026-02-19T08:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.031952 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.031988 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.032000 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.032016 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.032027 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.135764 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.135807 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.135818 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.135835 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.135847 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.238707 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.238766 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.238775 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.238808 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.238854 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.341338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.341391 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.341414 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.341428 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.341437 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.443562 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.443609 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.443661 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.443682 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.443703 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.462113 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:58:14.715415076 +0000 UTC Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.476446 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.476483 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.476492 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:37 crc kubenswrapper[5023]: E0219 08:01:37.476578 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:37 crc kubenswrapper[5023]: E0219 08:01:37.476744 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:37 crc kubenswrapper[5023]: E0219 08:01:37.476788 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.547754 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.547829 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.547854 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.547888 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.547913 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.650168 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.650214 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.650231 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.650254 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.650270 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.752684 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.752712 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.752720 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.752732 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.752740 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.855116 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.855144 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.855151 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.855164 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.855172 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.956902 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.956966 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.956983 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.957014 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:37 crc kubenswrapper[5023]: I0219 08:01:37.957031 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:37Z","lastTransitionTime":"2026-02-19T08:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.059234 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.059294 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.059310 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.059328 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.059340 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.161547 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.161583 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.161592 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.161606 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.161641 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.264529 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.264579 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.264589 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.264603 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.264612 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.367325 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.367390 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.367427 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.367454 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.367472 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.463139 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:21:36.603793318 +0000 UTC Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.470333 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.470388 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.470406 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.470428 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.470445 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.476796 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:38 crc kubenswrapper[5023]: E0219 08:01:38.476924 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.573718 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.573834 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.573861 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.573891 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.573916 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.677074 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.677148 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.677172 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.677205 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.677230 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.780616 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.780688 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.780704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.780725 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.780742 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.883254 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.883305 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.883323 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.883343 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.883359 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.985352 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.985390 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.985401 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.985417 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:38 crc kubenswrapper[5023]: I0219 08:01:38.985428 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:38Z","lastTransitionTime":"2026-02-19T08:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.087961 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.088002 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.088011 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.088024 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.088033 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.191010 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.191066 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.191084 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.191106 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.191120 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.293972 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.294002 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.294013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.294032 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.294042 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.396418 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.396458 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.396471 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.396485 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.396496 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.464240 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:50:42.388112355 +0000 UTC Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.476702 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.476724 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:39 crc kubenswrapper[5023]: E0219 08:01:39.476870 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.476729 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:39 crc kubenswrapper[5023]: E0219 08:01:39.476959 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:39 crc kubenswrapper[5023]: E0219 08:01:39.477081 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.499507 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.499607 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.499667 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.499700 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.499725 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.602386 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.602460 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.602487 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.602515 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.602536 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.704773 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.704813 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.704821 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.704836 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.704847 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.807649 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.807681 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.807690 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.807703 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.807714 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.909458 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.909495 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.909505 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.909521 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:39 crc kubenswrapper[5023]: I0219 08:01:39.909531 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:39Z","lastTransitionTime":"2026-02-19T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.011390 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.011438 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.011450 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.011466 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.011476 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.114144 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.114198 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.114210 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.114230 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.114243 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.216607 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.216704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.216726 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.216755 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.216771 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.319319 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.319361 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.319372 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.319386 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.319395 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.326990 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.327094 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.327299 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.327383 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.327454 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: E0219 08:01:40.349198 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.355145 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.355254 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.355281 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.355321 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.355352 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: E0219 08:01:40.375861 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.380792 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.380864 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.380881 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.380912 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.380931 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: E0219 08:01:40.404885 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.409203 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.409340 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.409407 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.409448 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.409518 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: E0219 08:01:40.440994 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.446543 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.446579 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.446590 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.446608 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.446638 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: E0219 08:01:40.464088 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:40Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:40 crc kubenswrapper[5023]: E0219 08:01:40.464341 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.464454 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:30:32.704035473 +0000 UTC Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.468015 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.468058 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.468068 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.468086 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.468098 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.476938 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:40 crc kubenswrapper[5023]: E0219 08:01:40.477322 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.573217 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.573325 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.573353 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.573395 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.573422 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.677844 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.677938 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.677967 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.678071 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.678096 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.781871 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.781923 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.781935 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.781956 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.781973 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.885207 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.885287 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.885393 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.885426 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.885447 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.989251 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.989294 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.989306 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.989322 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:40 crc kubenswrapper[5023]: I0219 08:01:40.989332 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:40Z","lastTransitionTime":"2026-02-19T08:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.093266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.093313 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.093322 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.093343 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.093359 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.196327 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.196402 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.196420 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.196875 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.196935 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.300864 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.301457 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.301610 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.301802 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.301977 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.405370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.405426 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.405438 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.405455 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.405468 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.465573 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:10:19.217011802 +0000 UTC Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.476491 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.476521 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.476698 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:41 crc kubenswrapper[5023]: E0219 08:01:41.476897 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:41 crc kubenswrapper[5023]: E0219 08:01:41.477086 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:41 crc kubenswrapper[5023]: E0219 08:01:41.477354 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.507377 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.507446 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.507463 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.507482 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.507546 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.611378 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.611463 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.611483 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.611517 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.611541 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.714616 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.714687 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.714699 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.714714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.714724 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.817681 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.817758 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.817777 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.817855 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.817903 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.920847 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.920916 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.920936 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.920961 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:41 crc kubenswrapper[5023]: I0219 08:01:41.920984 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:41Z","lastTransitionTime":"2026-02-19T08:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.024531 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.024656 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.024681 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.024715 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.024736 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.127887 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.127958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.127984 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.128018 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.128040 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.231341 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.231399 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.231411 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.231440 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.231451 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.334319 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.334387 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.334455 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.334483 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.334500 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.437696 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.437759 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.437770 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.437790 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.437802 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.466793 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:42:09.864916107 +0000 UTC Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.476460 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:42 crc kubenswrapper[5023]: E0219 08:01:42.476784 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.540854 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.540928 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.540947 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.540977 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.540998 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.643731 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.643808 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.643824 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.643853 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.643877 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.746301 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.746373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.746393 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.746423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.746444 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.850235 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.850292 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.850726 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.850792 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.850803 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.956080 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.956138 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.956151 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.956172 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:42 crc kubenswrapper[5023]: I0219 08:01:42.956184 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:42Z","lastTransitionTime":"2026-02-19T08:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.059427 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.059487 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.059504 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.059529 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.059545 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:43Z","lastTransitionTime":"2026-02-19T08:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.162663 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.162715 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.162729 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.162746 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.162760 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:43Z","lastTransitionTime":"2026-02-19T08:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.266449 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.266517 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.266537 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.266566 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:43 crc kubenswrapper[5023]: I0219 08:01:43.266589 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:43Z","lastTransitionTime":"2026-02-19T08:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.477010 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:02:16.969908997 +0000 UTC Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.477121 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:44 crc kubenswrapper[5023]: E0219 08:01:44.477251 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.477411 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:44 crc kubenswrapper[5023]: E0219 08:01:44.477468 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.477584 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:44 crc kubenswrapper[5023]: E0219 08:01:44.477661 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.481409 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:44 crc kubenswrapper[5023]: E0219 08:01:44.481555 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.489770 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.489854 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.489877 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.489909 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.489930 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:44Z","lastTransitionTime":"2026-02-19T08:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.505391 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z 
is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.530073 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.551480 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.571643 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
9T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.592857 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.595402 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.595598 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.595723 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.595842 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.595967 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:44Z","lastTransitionTime":"2026-02-19T08:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.609005 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.640493 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.665604 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.684685 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.703251 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.703324 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.703338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.703390 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.703411 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:44Z","lastTransitionTime":"2026-02-19T08:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.705824 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.722953 5023 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc 
kubenswrapper[5023]: I0219 08:01:44.737905 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.752008 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.773295 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.789858 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.806361 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.807183 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.807248 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.807269 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.807293 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.807311 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:44Z","lastTransitionTime":"2026-02-19T08:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.819584 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:44Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.911181 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.911476 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.911567 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.911720 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:44 crc kubenswrapper[5023]: I0219 08:01:44.911829 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:44Z","lastTransitionTime":"2026-02-19T08:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.014793 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.014886 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.014907 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.014938 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.014959 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.118349 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.118434 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.118455 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.118493 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.118516 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.222529 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.222610 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.222708 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.222756 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.222785 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.326111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.326203 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.326223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.326257 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.326279 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.430111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.430194 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.430213 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.430246 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.430271 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.477113 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:54:27.594319409 +0000 UTC Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.532854 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.532979 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.532997 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.533032 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.533053 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.636892 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.636975 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.636991 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.637020 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.637038 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.739909 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.739990 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.740014 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.740055 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.740085 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.843474 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.843511 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.843522 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.843538 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.843549 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.946831 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.947118 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.947217 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.947302 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:45 crc kubenswrapper[5023]: I0219 08:01:45.947389 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:45Z","lastTransitionTime":"2026-02-19T08:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.050498 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.050582 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.050600 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.050696 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.050719 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.158041 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.158100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.158112 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.158134 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.158149 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.261010 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.261089 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.261113 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.261144 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.261168 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.364560 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.364615 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.364655 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.364680 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.364696 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.468222 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.468281 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.468292 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.468309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.468322 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.476147 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:46 crc kubenswrapper[5023]: E0219 08:01:46.476275 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.476409 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:46 crc kubenswrapper[5023]: E0219 08:01:46.476684 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.477052 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:46 crc kubenswrapper[5023]: E0219 08:01:46.477411 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.477471 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.477524 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:57:37.841012371 +0000 UTC Feb 19 08:01:46 crc kubenswrapper[5023]: E0219 08:01:46.477972 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.571154 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.571898 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.572045 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.572148 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.572273 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.678073 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.678409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.678539 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.678700 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.678830 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.782142 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.782235 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.782255 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.782285 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.782306 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.884706 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.885120 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.885309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.885477 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.885660 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.989213 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.989695 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.989867 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.989976 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:46 crc kubenswrapper[5023]: I0219 08:01:46.990082 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:46Z","lastTransitionTime":"2026-02-19T08:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.093885 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.093939 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.093950 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.093973 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.093985 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.197763 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.197840 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.197858 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.197887 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.197904 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.301123 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.302285 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.302508 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.302894 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.303174 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.406280 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.406356 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.406375 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.406408 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.406432 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.477455 5023 scope.go:117] "RemoveContainer" containerID="f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f" Feb 19 08:01:47 crc kubenswrapper[5023]: E0219 08:01:47.477795 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.478729 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:11:07.124115995 +0000 UTC Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.508744 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.508787 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.508796 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.508809 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.508819 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.612420 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.612466 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.612476 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.612492 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.612505 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.715572 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.715858 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.715869 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.715886 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.715898 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.819062 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.819142 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.819161 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.819191 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.819210 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.922232 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.922286 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.922301 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.922323 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:47 crc kubenswrapper[5023]: I0219 08:01:47.922336 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:47Z","lastTransitionTime":"2026-02-19T08:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.024459 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.024504 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.024516 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.024572 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.024589 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.127792 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.127836 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.127846 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.127864 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.127876 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.230501 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.230547 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.230560 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.230575 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.230587 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.361271 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.361321 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.361331 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.361349 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.361361 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.463972 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.464012 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.464025 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.464040 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.464050 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.476265 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:48 crc kubenswrapper[5023]: E0219 08:01:48.476444 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.476312 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:48 crc kubenswrapper[5023]: E0219 08:01:48.476836 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.476280 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:48 crc kubenswrapper[5023]: E0219 08:01:48.477107 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.476339 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:48 crc kubenswrapper[5023]: E0219 08:01:48.477395 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.479377 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:49:46.942471775 +0000 UTC Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.567240 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.567542 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.567603 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.567694 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.567756 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.670077 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.670583 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.670665 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.670729 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.670782 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.779370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.779423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.779440 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.779456 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.779465 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.882455 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.882522 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.882539 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.882807 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.882899 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.985525 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.985580 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.985595 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.985637 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:48 crc kubenswrapper[5023]: I0219 08:01:48.985652 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:48Z","lastTransitionTime":"2026-02-19T08:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.088681 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.088737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.088750 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.088773 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.088791 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.191873 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.191926 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.191941 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.191962 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.191977 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.294701 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.294747 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.294776 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.294797 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.294807 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.398606 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.398661 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.398671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.398687 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.398698 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.480425 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:13:25.860807134 +0000 UTC Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.500994 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.501060 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.501075 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.501119 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.501131 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.604851 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.604927 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.604946 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.604984 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.605022 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.708913 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.708981 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.709000 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.709030 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.709048 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.812232 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.812289 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.812301 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.812322 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.812337 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.915789 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.915860 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.915872 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.915894 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:49 crc kubenswrapper[5023]: I0219 08:01:49.915912 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:49Z","lastTransitionTime":"2026-02-19T08:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.019309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.019372 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.019393 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.019429 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.019448 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.123885 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.123944 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.123960 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.123988 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.124006 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.227198 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.227251 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.227260 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.227278 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.227290 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.330810 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.330881 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.330898 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.330926 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.330947 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.434085 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.434165 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.434187 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.434224 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.434247 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.472069 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.472156 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.472177 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.472207 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.472231 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.476539 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.476566 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.476680 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.476751 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.476955 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.477167 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.477289 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.477420 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.480686 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:43:45.203495816 +0000 UTC Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.486218 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.486345 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.486422 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. No retries permitted until 2026-02-19 08:02:22.486408365 +0000 UTC m=+100.143527313 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.490042 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6
107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:50Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.494944 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.494983 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.494995 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.495013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.495024 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.511199 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:50Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.516135 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.516224 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.516250 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.516294 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.516320 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.534115 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:50Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.540272 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.540335 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.540346 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.540364 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.540383 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.552300 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:50Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.556100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.556157 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.556174 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.556198 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.556213 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.571522 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:50Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:50 crc kubenswrapper[5023]: E0219 08:01:50.571953 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.573987 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.574089 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.574153 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.574217 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.574297 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.676422 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.676461 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.676471 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.676486 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.676498 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.779526 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.780038 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.780139 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.780248 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.780314 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.883599 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.883714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.883734 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.883766 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.883786 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.988282 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.988370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.988390 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.988423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:50 crc kubenswrapper[5023]: I0219 08:01:50.988447 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:50Z","lastTransitionTime":"2026-02-19T08:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.091389 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.091455 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.091486 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.091512 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.091541 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.194918 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.195008 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.195036 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.195073 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.195098 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.297676 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.297732 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.297743 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.297757 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.297767 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.400869 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.400948 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.400958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.400974 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.400984 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.480908 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:13:34.83461555 +0000 UTC Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.502899 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.502935 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.502944 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.502958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.502967 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.605291 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.605338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.605350 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.605367 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.605378 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.707769 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.707826 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.707841 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.707859 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.707872 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.809560 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.809603 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.809644 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.809666 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.809678 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.911858 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.912140 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.912221 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.912297 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:51 crc kubenswrapper[5023]: I0219 08:01:51.912362 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:51Z","lastTransitionTime":"2026-02-19T08:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.014907 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.014979 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.014994 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.015010 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.015021 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.117946 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.117990 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.118002 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.118018 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.118028 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.220717 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.220786 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.220804 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.220832 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.220850 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.324183 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.324269 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.324290 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.324321 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.324344 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.426697 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.426748 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.426761 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.426778 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.426790 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.475967 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.476002 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.475972 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.476164 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:52 crc kubenswrapper[5023]: E0219 08:01:52.476291 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:52 crc kubenswrapper[5023]: E0219 08:01:52.476449 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:52 crc kubenswrapper[5023]: E0219 08:01:52.476691 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:52 crc kubenswrapper[5023]: E0219 08:01:52.476875 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.481387 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:28:01.058385801 +0000 UTC Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.510479 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/0.log" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.510610 5023 generic.go:334] "Generic (PLEG): container finished" podID="c4610eec-5318-4742-b598-a88feb94cf7d" containerID="35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13" exitCode=1 Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.510718 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerDied","Data":"35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.511611 5023 scope.go:117] "RemoveContainer" containerID="35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.524523 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:51Z\\\",\\\"message\\\":\\\"2026-02-19T08:01:06+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b\\\\n2026-02-19T08:01:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b to /host/opt/cni/bin/\\\\n2026-02-19T08:01:06Z [verbose] multus-daemon started\\\\n2026-02-19T08:01:06Z [verbose] Readiness Indicator file check\\\\n2026-02-19T08:01:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.529013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.529094 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.529113 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.529142 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.529161 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.534135 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.551742 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.564337 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.576477 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.593808 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.616782 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.632223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.632270 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.632288 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.632312 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.632330 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.632381 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.650332 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.666277 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.678970 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc 
kubenswrapper[5023]: I0219 08:01:52.693123 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.703707 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.715730 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.727886 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.734963 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.735000 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.735011 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.735037 
5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.735049 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.740850 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.751136 5023 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:52Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.838409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.838444 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.838456 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.838472 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.838484 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.941029 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.941243 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.941267 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.941321 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:52 crc kubenswrapper[5023]: I0219 08:01:52.941338 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:52Z","lastTransitionTime":"2026-02-19T08:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.043660 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.043696 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.043704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.043718 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.043729 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.146429 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.146475 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.146487 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.146505 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.146517 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.248876 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.249154 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.249223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.249297 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.249360 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.351856 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.351905 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.351918 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.351934 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.351946 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.454675 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.454895 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.454980 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.455057 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.455128 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.481480 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:00:44.883601843 +0000 UTC Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.490437 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:51Z\\\",\\\"message\\\":\\\"2026-02-19T08:01:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b\\\\n2026-02-19T08:01:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b to /host/opt/cni/bin/\\\\n2026-02-19T08:01:06Z [verbose] multus-daemon started\\\\n2026-02-19T08:01:06Z [verbose] Readiness Indicator file check\\\\n2026-02-19T08:01:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.505656 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.514653 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/0.log" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.514712 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerStarted","Data":"89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.523051 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.534658 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.548027 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.557914 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.557968 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.557979 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.557997 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.558013 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.560355 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.583768 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.597248 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.609736 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.625991 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.640667 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc 
kubenswrapper[5023]: I0219 08:01:53.655671 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.660101 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.660142 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.660153 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.660169 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.660178 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.670466 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.690384 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.706151 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.721677 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.743117 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.760692 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.762901 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.763022 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.763043 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.763086 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.763112 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.774893 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.790101 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.802469 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.823471 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.838241 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.851482 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.865704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.865790 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.865814 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.865853 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.865877 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.868819 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.882882 5023 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc 
kubenswrapper[5023]: I0219 08:01:53.898133 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.913029 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.930289 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.947335 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.960944 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.967700 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.967737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.967752 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.967775 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.967790 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:53Z","lastTransitionTime":"2026-02-19T08:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.973867 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.989220 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:51Z\\\",\\\"message\\\":\\\"2026-02-19T08:01:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b\\\\n2026-02-19T08:01:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b to /host/opt/cni/bin/\\\\n2026-02-19T08:01:06Z [verbose] multus-daemon started\\\\n2026-02-19T08:01:06Z [verbose] Readiness Indicator file check\\\\n2026-02-19T08:01:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:53 crc kubenswrapper[5023]: I0219 08:01:53.998891 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:53Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.070067 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.070437 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.070579 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.070707 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 
08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.070792 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.174249 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.174560 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.174661 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.174737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.174850 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.278505 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.278791 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.278911 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.279011 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.279149 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.382760 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.382885 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.382913 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.382958 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.382988 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.476110 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.476310 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.476417 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.476600 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:54 crc kubenswrapper[5023]: E0219 08:01:54.476719 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:54 crc kubenswrapper[5023]: E0219 08:01:54.476828 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:54 crc kubenswrapper[5023]: E0219 08:01:54.476918 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:54 crc kubenswrapper[5023]: E0219 08:01:54.477019 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.481841 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:11:51.813027445 +0000 UTC Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.484722 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.484749 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.484761 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.484776 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.484787 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.492802 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.587573 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.587609 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.587635 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.587653 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.587666 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.690276 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.690368 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.690389 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.690424 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.690445 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.794580 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.794673 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.794688 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.794721 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.794739 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.897547 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.897600 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.897612 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.897658 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:54 crc kubenswrapper[5023]: I0219 08:01:54.897671 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:54Z","lastTransitionTime":"2026-02-19T08:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.000416 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.000514 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.000538 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.000573 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.000600 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.103387 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.103447 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.103459 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.103510 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.103523 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.206040 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.206108 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.206127 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.206179 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.206201 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.308956 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.309007 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.309018 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.309043 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.309056 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.412226 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.412286 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.412300 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.412331 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.412346 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.482885 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:50:20.528079336 +0000 UTC Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.515299 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.515344 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.515358 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.515380 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.515392 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.618349 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.618409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.618430 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.618461 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.618485 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.721963 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.722013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.722024 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.722044 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.722057 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.824934 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.825009 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.825026 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.825056 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.825077 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.928481 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.928543 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.928560 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.928586 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:55 crc kubenswrapper[5023]: I0219 08:01:55.928605 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:55Z","lastTransitionTime":"2026-02-19T08:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.031922 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.031967 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.031979 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.031998 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.032010 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.135255 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.135306 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.135322 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.135341 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.135356 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.237880 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.237925 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.237940 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.237956 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.237969 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.340158 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.340215 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.340233 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.340257 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.340274 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.443313 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.443849 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.444006 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.444149 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.444270 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.475946 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.475944 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.476015 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.477190 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:56 crc kubenswrapper[5023]: E0219 08:01:56.477327 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:56 crc kubenswrapper[5023]: E0219 08:01:56.477737 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:56 crc kubenswrapper[5023]: E0219 08:01:56.477886 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:56 crc kubenswrapper[5023]: E0219 08:01:56.477981 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.483917 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:43:04.939196317 +0000 UTC Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.547159 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.547535 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.547671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.547899 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.548023 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.652328 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.652737 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.652906 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.653046 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.653206 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.756370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.756400 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.756409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.756422 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.756431 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.859583 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.859890 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.860030 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.860199 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.860333 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.963454 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.963484 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.963492 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.963507 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:56 crc kubenswrapper[5023]: I0219 08:01:56.963516 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:56Z","lastTransitionTime":"2026-02-19T08:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.065562 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.065656 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.065671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.065697 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.065712 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.167698 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.167727 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.167735 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.167748 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.167756 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.270100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.270125 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.270133 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.270146 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.270154 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.372132 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.372373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.372435 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.372497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.372571 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.475559 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.475834 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.475939 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.476053 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.476285 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.484736 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:07:10.62288899 +0000 UTC Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.579501 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.579974 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.580104 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.580257 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.580377 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.683348 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.683710 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.683947 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.684092 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.684427 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.787128 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.787493 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.787699 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.787863 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.788007 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.890176 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.890441 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.890515 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.890579 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.890672 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.992916 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.992959 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.992970 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.992988 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:57 crc kubenswrapper[5023]: I0219 08:01:57.992998 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:57Z","lastTransitionTime":"2026-02-19T08:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.095768 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.095799 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.095806 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.095821 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.095829 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.198480 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.199227 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.199417 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.199563 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.199784 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.303733 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.304246 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.304406 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.304567 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.304755 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.408436 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.408877 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.408981 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.409080 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.409157 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.476643 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.477461 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.477496 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.477525 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:01:58 crc kubenswrapper[5023]: E0219 08:01:58.477531 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:01:58 crc kubenswrapper[5023]: E0219 08:01:58.478267 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:01:58 crc kubenswrapper[5023]: E0219 08:01:58.478401 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:01:58 crc kubenswrapper[5023]: E0219 08:01:58.478519 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.478845 5023 scope.go:117] "RemoveContainer" containerID="f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.485647 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:18:28.115451736 +0000 UTC Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.515207 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.515253 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.515266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.515285 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.515299 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.617560 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.617598 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.617673 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.617691 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.617701 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.720373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.720433 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.720445 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.720482 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.720497 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.823150 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.823190 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.823199 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.823216 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.823231 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.925587 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.925646 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.925704 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.925718 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:58 crc kubenswrapper[5023]: I0219 08:01:58.925728 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:58Z","lastTransitionTime":"2026-02-19T08:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.027798 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.027830 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.027839 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.027853 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.027861 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.130913 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.130962 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.130972 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.130991 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.131001 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.233546 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.233614 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.233647 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.233675 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.233694 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.335964 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.336052 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.336076 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.336106 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.336128 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.439016 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.439103 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.439127 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.439206 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.439271 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.486187 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:49:12.849053293 +0000 UTC Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.542352 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/3.log" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.543221 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.543282 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.543304 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.543335 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.543356 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.543472 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/2.log" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.546536 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" exitCode=1 Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.546570 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.546603 5023 scope.go:117] "RemoveContainer" containerID="f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.548040 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:01:59 crc kubenswrapper[5023]: E0219 08:01:59.548335 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.572412 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.593471 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.609683 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.625330 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc 
kubenswrapper[5023]: I0219 08:01:59.646454 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.646508 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.646522 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.646545 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.646559 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.648817 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.666081 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75914e0-b4b7-4c5d-a0b0-887123ae747d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cf587f639d4701b513756716fdf96f367c2345e56577ce8ec77104b7fb0ca89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc 
kubenswrapper[5023]: I0219 08:01:59.683213 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.700835 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.716703 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.730539 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.745323 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.750438 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.750497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.750510 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc 
kubenswrapper[5023]: I0219 08:01:59.750536 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.750551 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.761180 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:51Z\\\",\\\"message\\\":\\\"2026-02-19T08:01:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b\\\\n2026-02-19T08:01:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b to /host/opt/cni/bin/\\\\n2026-02-19T08:01:06Z [verbose] multus-daemon started\\\\n2026-02-19T08:01:06Z [verbose] Readiness Indicator file check\\\\n2026-02-19T08:01:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.774379 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.798818 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.813848 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.828734 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.841087 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.854113 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.854210 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.854236 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.854274 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.854297 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.860162 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:59Z\\\",\\\"message\\\":\\\"us-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325184 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0219 08:01:59.325278 7031 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z]\\\\nI0219 08:01:59.325290 7031 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325284 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mrqg4\\\\nI0219 08:01:59.325297 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.957710 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.957798 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.957820 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.957850 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:01:59 crc kubenswrapper[5023]: I0219 08:01:59.957909 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:01:59Z","lastTransitionTime":"2026-02-19T08:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.061433 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.061517 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.061542 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.061580 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.061609 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.164361 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.164414 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.164424 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.164441 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.164450 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.267835 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.267879 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.267891 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.267911 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.267925 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.371407 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.371445 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.371456 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.371474 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.371484 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476022 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476070 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476076 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476105 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476126 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476208 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476260 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.476284 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476287 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.476380 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.476327 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.476437 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.476592 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.486360 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:00:56.922507928 +0000 UTC Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.552926 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/3.log" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.580551 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.580574 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.580583 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.580595 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.580604 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.683075 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.683124 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.683133 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.683148 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.683158 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.784935 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.785017 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.785041 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.785077 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.785100 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.878224 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.878301 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.878319 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.878348 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.878366 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.897644 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:00Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.903403 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.903482 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.903510 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.903546 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.903573 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.931614 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:00Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.937538 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.937615 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.937676 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.937713 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.937732 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.957201 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:00Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.961705 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.961942 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.961965 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.961988 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.962006 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:00 crc kubenswrapper[5023]: E0219 08:02:00.982023 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:00Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.988178 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.988233 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.988250 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.988279 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:00 crc kubenswrapper[5023]: I0219 08:02:00.988297 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:00Z","lastTransitionTime":"2026-02-19T08:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: E0219 08:02:01.009128 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:01Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:01 crc kubenswrapper[5023]: E0219 08:02:01.009443 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.011360 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.011420 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.011433 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.011449 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.011460 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.114904 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.114962 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.114980 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.115008 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.115028 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.219483 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.219520 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.219531 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.219547 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.219557 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.322375 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.322467 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.322487 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.322525 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.322549 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.425685 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.425742 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.425756 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.425779 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.425794 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.487303 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:07:06.287926727 +0000 UTC Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.529097 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.529181 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.529195 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.529213 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.529227 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.632740 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.632792 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.632805 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.632829 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.632844 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.736031 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.736076 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.736085 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.736104 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.736115 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.839181 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.839216 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.839224 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.839239 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.839249 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.942211 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.942253 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.942262 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.942277 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:01 crc kubenswrapper[5023]: I0219 08:02:01.942287 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:01Z","lastTransitionTime":"2026-02-19T08:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.046013 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.046076 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.046090 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.046106 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.046117 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.149705 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.149764 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.149781 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.149800 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.149813 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.253017 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.253109 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.253129 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.253157 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.253175 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.356382 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.356470 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.356487 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.356520 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.356539 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.460223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.460307 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.460328 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.460355 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.460373 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.475871 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.475977 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.476013 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.476053 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:02 crc kubenswrapper[5023]: E0219 08:02:02.476236 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:02 crc kubenswrapper[5023]: E0219 08:02:02.476396 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:02 crc kubenswrapper[5023]: E0219 08:02:02.476580 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:02 crc kubenswrapper[5023]: E0219 08:02:02.476804 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.487888 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:54:51.082678639 +0000 UTC Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.563689 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.563743 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.563763 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.563788 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.563805 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.666223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.666299 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.666318 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.666345 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.666364 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.769920 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.769989 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.770008 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.770040 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.770064 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.873965 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.874024 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.874039 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.874062 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.874076 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.978844 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.978913 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.978933 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.978963 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:02 crc kubenswrapper[5023]: I0219 08:02:02.978985 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:02Z","lastTransitionTime":"2026-02-19T08:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.088976 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.089049 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.089063 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.089084 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.089114 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.193100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.193163 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.193180 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.193205 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.193225 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.298654 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.298791 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.298812 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.298842 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.298860 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.402423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.402828 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.402918 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.402999 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.403065 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.488558 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:01:33.233659298 +0000 UTC Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.494475 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.505335 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.505427 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.505440 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.505460 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.505472 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.528508 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f26c6a08b49c737627aecd3e74e2eb91a9d783150a96d1f219937fa0c3ad247f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:28Z\\\",\\\"message\\\":\\\"_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 08:01:28.566712 6636 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 08:01:28.566567 6636 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0219 08:01:28.566717 6636 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0219 08:01:28.566720 6636 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0219 08:01:28.566542 6636 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566729 6636 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 08:01:28.566735 6636 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0219 08:01:28.566741 6636 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:59Z\\\",\\\"message\\\":\\\"us-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325184 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0219 08:01:59.325278 7031 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z]\\\\nI0219 08:01:59.325290 7031 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325284 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mrqg4\\\\nI0219 08:01:59.325297 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.546024 5023 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9a
aa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.561344 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.582053 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.595749 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc 
kubenswrapper[5023]: I0219 08:02:03.607827 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.607889 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.607902 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.607919 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.607966 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.612586 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.626018 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.643338 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.659767 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.675798 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.687334 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.704483 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.710269 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.710309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.710319 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc 
kubenswrapper[5023]: I0219 08:02:03.710335 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.710346 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.726036 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
ebca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.737879 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75914e0-b4b7-4c5d-a0b0-887123ae747d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cf587f639d4701b513756716fdf96f367c2345e56577ce8ec77104b7fb0ca89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.759295 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.776298 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:51Z\\\",\\\"message\\\":\\\"2026-02-19T08:01:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b\\\\n2026-02-19T08:01:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b to /host/opt/cni/bin/\\\\n2026-02-19T08:01:06Z [verbose] multus-daemon started\\\\n2026-02-19T08:01:06Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T08:01:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.791709 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731
595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:03Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.813458 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.813509 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.813522 5023 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.813548 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.813561 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.917372 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.917430 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.917448 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.917476 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:03 crc kubenswrapper[5023]: I0219 08:02:03.917497 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:03Z","lastTransitionTime":"2026-02-19T08:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.016945 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.018749 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:02:04 crc kubenswrapper[5023]: E0219 08:02:04.019145 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.020173 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.020217 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.020227 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.020243 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.020255 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.032247 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.045983 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.057746 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.071196 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f21
87376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.083567 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75914e0-b4b7-4c5d-a0b0-887123ae747d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cf587f639d4701b513756716fdf96f367c2345e56577ce8ec77104b7fb0ca89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.101652 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.119083 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.123294 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.123353 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.123368 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.123393 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.123408 5023 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.149853 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3567
9b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:51Z\\\",\\\"message\\\":\\\"2026-02-19T08:01:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b\\\\n2026-02-19T08:01:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b to /host/opt/cni/bin/\\\\n2026-02-19T08:01:06Z [verbose] multus-daemon started\\\\n2026-02-19T08:01:06Z [verbose] Readiness Indicator file check\\\\n2026-02-19T08:01:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.161912 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.184504 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:59Z\\\",\\\"message\\\":\\\"us-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325184 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0219 08:01:59.325278 7031 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z]\\\\nI0219 08:01:59.325290 7031 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325284 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mrqg4\\\\nI0219 08:01:59.325297 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.200730 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.213116 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.225268 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.226100 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.226129 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.226141 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.226160 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.226172 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.235368 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.247768 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.260244 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.270037 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1
805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.280095 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:04Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:04 crc 
kubenswrapper[5023]: I0219 08:02:04.328470 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.328507 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.328518 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.328537 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.328551 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.432008 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.432053 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.432070 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.432095 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.432119 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.476988 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.477035 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:04 crc kubenswrapper[5023]: E0219 08:02:04.477359 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.477092 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:04 crc kubenswrapper[5023]: E0219 08:02:04.477444 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:04 crc kubenswrapper[5023]: E0219 08:02:04.477534 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.478114 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:04 crc kubenswrapper[5023]: E0219 08:02:04.478254 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.488819 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:15:00.338061423 +0000 UTC Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.534736 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.534776 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.534788 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.534805 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.534817 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.637504 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.637552 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.637561 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.637579 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.637590 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.740199 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.740253 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.740267 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.740287 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.740302 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.842680 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.842719 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.842730 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.842749 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.842759 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.946046 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.946119 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.946136 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.946165 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:04 crc kubenswrapper[5023]: I0219 08:02:04.946187 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:04Z","lastTransitionTime":"2026-02-19T08:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.049563 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.049612 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.049647 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.049671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.049689 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.153088 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.153131 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.153141 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.153156 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.153170 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.256547 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.257141 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.257157 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.257187 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.257203 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.360324 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.360376 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.360390 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.360409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.360420 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.462892 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.462933 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.462943 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.462959 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.462969 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.488961 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:56:46.502281316 +0000 UTC Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.565949 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.565993 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.566003 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.566021 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.566033 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.669469 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.669509 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.669519 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.669534 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.669543 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.771481 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.771557 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.771581 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.771615 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.771681 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.873885 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.873965 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.873985 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.874016 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.874039 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.976678 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.976709 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.976718 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.976733 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:05 crc kubenswrapper[5023]: I0219 08:02:05.976742 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:05Z","lastTransitionTime":"2026-02-19T08:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.079206 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.079239 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.079248 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.079263 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.079272 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.181436 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.181499 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.181516 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.181543 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.181562 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.284595 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.284658 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.284671 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.284692 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.284705 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.387112 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.387157 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.387169 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.387190 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.387205 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.476804 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.476835 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.476866 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:06 crc kubenswrapper[5023]: E0219 08:02:06.476941 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:06 crc kubenswrapper[5023]: E0219 08:02:06.477100 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.477148 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:06 crc kubenswrapper[5023]: E0219 08:02:06.477231 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:06 crc kubenswrapper[5023]: E0219 08:02:06.477233 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.489462 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:48:16.906891426 +0000 UTC Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.489762 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.489802 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.489815 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.489835 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.489848 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.592345 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.592408 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.592425 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.592453 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.592470 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.695907 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.695979 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.695998 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.696025 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.696046 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.798769 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.798846 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.798861 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.798880 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.798892 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.902727 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.902788 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.902808 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.902836 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:06 crc kubenswrapper[5023]: I0219 08:02:06.902856 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:06Z","lastTransitionTime":"2026-02-19T08:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.005444 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.005498 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.005516 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.005545 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.005564 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.109480 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.109529 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.109538 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.109556 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.109566 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.212871 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.212918 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.212928 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.212945 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.212956 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.316019 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.316074 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.316087 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.316290 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.316304 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.391015 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.391229 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.391298 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.39125746 +0000 UTC m=+149.048376418 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.391407 5023 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.391507 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.391480816 +0000 UTC m=+149.048599804 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.419391 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.419468 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.419481 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.419499 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.419544 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.489973 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:11:16.019742093 +0000 UTC Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.492796 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.492917 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493045 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493091 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493109 5023 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493150 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493181 5023 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493208 5023 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493186 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.493164187 +0000 UTC m=+149.150283145 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493414 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-19 08:03:11.493323462 +0000 UTC m=+149.150442450 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.493507 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493601 5023 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: E0219 08:02:07.493671 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.493660251 +0000 UTC m=+149.150779209 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.526395 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.526454 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.526483 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.526508 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.526527 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.629293 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.629342 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.629354 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.629372 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.629381 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.732256 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.732532 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.732606 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.732709 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.732783 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.835320 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.835385 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.835405 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.835432 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.835452 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.938714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.938775 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.938794 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.938826 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:07 crc kubenswrapper[5023]: I0219 08:02:07.938845 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:07Z","lastTransitionTime":"2026-02-19T08:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.042025 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.042078 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.042097 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.042123 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.042141 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.144538 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.144570 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.144579 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.144593 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.144602 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.247598 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.247707 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.247734 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.247772 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.247812 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.351099 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.351155 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.351167 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.351187 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.351201 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.453813 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.453884 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.453904 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.453934 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.453953 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.476845 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.476873 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.476873 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.476886 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:08 crc kubenswrapper[5023]: E0219 08:02:08.477020 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:08 crc kubenswrapper[5023]: E0219 08:02:08.477154 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:08 crc kubenswrapper[5023]: E0219 08:02:08.477288 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:08 crc kubenswrapper[5023]: E0219 08:02:08.477349 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.491009 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:07:22.793657945 +0000 UTC Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.556714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.556765 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.556776 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.556797 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.556813 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.659828 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.659903 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.659921 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.659947 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.659966 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.763074 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.763142 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.763162 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.763203 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.763235 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.865694 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.865763 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.865782 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.865805 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.865821 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.968373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.968453 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.968475 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.968505 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:08 crc kubenswrapper[5023]: I0219 08:02:08.968522 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:08Z","lastTransitionTime":"2026-02-19T08:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.072336 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.072394 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.072409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.072434 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.072448 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.175118 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.175156 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.175165 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.175184 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.175194 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.277930 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.278004 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.278027 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.278061 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.278092 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.380615 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.380684 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.380697 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.380717 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.380726 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.482983 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.483035 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.483048 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.483065 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.483076 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.491332 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:08:16.728237815 +0000 UTC Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.586224 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.586284 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.586294 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.586315 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.586326 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.688680 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.688719 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.688733 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.688754 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.688764 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.791648 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.791690 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.791699 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.791714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.791725 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.894353 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.894399 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.894410 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.894426 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.894435 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.998102 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.998163 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.998181 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.998208 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:09 crc kubenswrapper[5023]: I0219 08:02:09.998226 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:09Z","lastTransitionTime":"2026-02-19T08:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.101609 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.101667 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.101678 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.101696 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.101707 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.205204 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.205305 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.205325 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.205359 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.205383 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.308428 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.308467 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.308479 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.308497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.308509 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.410995 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.411063 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.411084 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.411111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.411132 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.476905 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.477069 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.477141 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:10 crc kubenswrapper[5023]: E0219 08:02:10.477153 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:10 crc kubenswrapper[5023]: E0219 08:02:10.477261 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.477346 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:10 crc kubenswrapper[5023]: E0219 08:02:10.477557 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:10 crc kubenswrapper[5023]: E0219 08:02:10.477651 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.491421 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:32:54.851693792 +0000 UTC Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.513751 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.513823 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.513848 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.513875 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.513895 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.616989 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.617075 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.617099 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.617190 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.617224 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.720997 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.721063 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.721081 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.721108 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.721129 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.823367 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.823409 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.823423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.823443 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.823454 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.926302 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.926341 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.926350 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.926367 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:10 crc kubenswrapper[5023]: I0219 08:02:10.926379 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:10Z","lastTransitionTime":"2026-02-19T08:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.029774 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.029865 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.029892 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.029927 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.029954 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.133500 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.133580 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.133599 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.133669 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.133693 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.236582 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.236740 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.236774 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.236812 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.236838 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.339797 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.339866 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.339882 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.339904 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.339921 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.403031 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.403129 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.403153 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.403184 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.403203 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: E0219 08:02:11.425292 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.431352 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.431423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.431460 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.431496 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.431522 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: E0219 08:02:11.455281 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.461023 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.461108 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.461136 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.461172 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.461194 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: E0219 08:02:11.483606 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.489033 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.489107 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.489125 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.489152 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.489172 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.492020 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:43:11.733093277 +0000 UTC Feb 19 08:02:11 crc kubenswrapper[5023]: E0219 08:02:11.512306 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",
\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.524785 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.527470 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.528708 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.528800 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.528828 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: E0219 08:02:11.551525 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T08:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d46b7364-9350-4121-8387-6107f6e4f229\\\",\\\"systemUUID\\\":\\\"5e5c6cee-d6a5-40a2-be59-600505972de8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:11Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:11 crc kubenswrapper[5023]: E0219 08:02:11.551735 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.554242 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.554299 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.554314 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.554341 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.554358 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.657243 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.657319 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.657338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.657370 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.657387 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.761589 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.761667 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.761680 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.761702 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.761720 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.865130 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.865176 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.865187 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.865209 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.865249 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.968339 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.968392 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.968402 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.968421 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:11 crc kubenswrapper[5023]: I0219 08:02:11.968434 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:11Z","lastTransitionTime":"2026-02-19T08:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.071477 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.071533 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.071545 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.071564 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.071574 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.175147 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.175193 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.175389 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.175448 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.175464 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.278063 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.278111 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.278120 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.278138 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.278148 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.382265 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.382318 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.382330 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.382346 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.382364 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.476264 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.476305 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.476308 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.476264 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:12 crc kubenswrapper[5023]: E0219 08:02:12.476470 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:12 crc kubenswrapper[5023]: E0219 08:02:12.476522 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:12 crc kubenswrapper[5023]: E0219 08:02:12.476595 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:12 crc kubenswrapper[5023]: E0219 08:02:12.476689 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.485147 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.485216 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.485244 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.485275 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.485297 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.492611 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:42:53.251558921 +0000 UTC Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.588113 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.588167 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.588178 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.588194 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.588205 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.691395 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.691473 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.691497 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.691533 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.691557 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.794581 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.794655 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.794665 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.794683 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.794694 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.898015 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.898058 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.898067 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.898084 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:12 crc kubenswrapper[5023]: I0219 08:02:12.898093 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:12Z","lastTransitionTime":"2026-02-19T08:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.001683 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.001731 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.001739 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.001760 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.001774 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.104291 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.104338 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.104351 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.104398 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.104411 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.207828 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.207874 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.207884 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.207903 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.207914 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.309843 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.309951 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.309964 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.309980 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.309991 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.414650 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.414702 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.414713 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.414731 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.414742 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.493530 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:16:12.232083609 +0000 UTC Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.493692 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a4fc33d0c1775436bf51908fac342ca774593f2bed66099c14228a650aacfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.504182 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.514986 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3e4d325-7b2d-4177-b955-cc85093996a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46301dbad7f4828927ae125c44e2f8acbb2c5ea1921ea1b4ee99d2e4eb5572cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d
5070d4a4416877c582fd9676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vxxnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-444kx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.517772 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.517795 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.517806 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc 
kubenswrapper[5023]: I0219 08:02:13.517820 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.517830 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.534094 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74jld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2403771-cd0a-411c-8666-bdeb65e9ca0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91e1f79048c4fd5439ba5b6df47bc7135201f1a38c5504930d152919b181930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab0aeb39b85b6faee3edd4ca439ce368b7914938716319f8240085e4259cad4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562de17c0fca8cfcde517b614f9252d5eef1f88fe8e8232077113494ce37152c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e893fb515c897664058e96b6a8aa508da1f03fe2d11594355192f5a3c9e212f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f2187376d1cda98247b62d1c8208cae83e87d4e3b12323e569eee265405e59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75f017a892f9b8aefb9ecd4bb75211cc1406d658635fe46d4f84a6b1566a0ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1febca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
ebca4cbac61d0a91bfd2f26d852c9ffa0e982aaf5224095dc8611428127b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9nnbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74jld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.546440 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75914e0-b4b7-4c5d-a0b0-887123ae747d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cf587f639d4701b513756716fdf96f367c2345e56577ce8ec77104b7fb0ca89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8df98e6ae126eec5548a1291acd77f3c52505e77b23478ee107eb92b0bc943c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.559832 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9150994a-1c1a-421d-93f7-d170eba52e40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a53e902282602eb79adaeddc703b4fc2c4326d66738932c92e363e1f19bb1606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c0b58eef60724109fd46f90c45fe005322238364ca3d27f4e09f1928967689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c023448170e5a58bfd73376ea7711448d6adb2b788d14ebe13d08cc407969c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.596388 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://255e29723cf753db9afc15da7d40467d4335ee999bbe493f55857d94719a39e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b8447da5a920c3a745a4876c95c3cbbae7be97d28488dc83a8e0e086b0d0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.620235 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t9v9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4610eec-5318-4742-b598-a88feb94cf7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:51Z\\\",\\\"message\\\":\\\"2026-02-19T08:01:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b\\\\n2026-02-19T08:01:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd6df30b-0989-4243-b18c-769d257f532b to /host/opt/cni/bin/\\\\n2026-02-19T08:01:06Z [verbose] multus-daemon started\\\\n2026-02-19T08:01:06Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T08:01:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z9mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t9v9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.620321 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.620360 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.620373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.620398 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.620411 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.630857 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-74fm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f96bf9d-2c05-444e-9efa-2f6f0ab87de3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70fd12f0abe9c0731595d2be17e0c8ef6cd116ef9120a214e708e1dd66566fd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64bj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:09Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-74fm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.652762 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:06Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T08:01:59Z\\\",\\\"message\\\":\\\"us-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325184 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF0219 08:01:59.325278 7031 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:01:59Z is after 2025-08-24T17:21:41Z]\\\\nI0219 08:01:59.325290 7031 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-74jld\\\\nI0219 08:01:59.325284 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mrqg4\\\\nI0219 08:01:59.325297 7031 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ccc1da9dd9ead4996
cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2wtn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrqg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.666244 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddb71723-0da9-449c-9fbd-8acfc7e7da29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T08:01:02Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 08:00:56.980443 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 08:00:56.982569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3500031362/tls.crt::/tmp/serving-cert-3500031362/tls.key\\\\\\\"\\\\nI0219 08:01:02.953591 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 08:01:02.959069 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 08:01:02.959180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 08:01:02.959242 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 08:01:02.959278 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 08:01:02.965772 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 08:01:02.965850 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 08:01:02.965914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 08:01:02.965943 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 08:01:02.965975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 08:01:02.966004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 08:01:02.966259 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 08:01:02.967249 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31bea0f58a0ce7fe6411b8e21d0b88423
6f98f1a73693d361176f87dd7af546f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.676571 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812201c-de01-4973-997c-be2725b3131d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79a15d7b0d68078109799def80afba1cec19e936fe57955ed41a993029a2ccf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4c5f69c50a35a6f0b8c639ed5b6e43c32a5a3d9fe4a5c656601451aae33f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087cc3887aaf240372d4c6f2aedeb2fc6b707fada42e0d0e2d7d3c38db9205a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:00:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e187ba9b1dd1b4f6ae77df830e4d4ba4783c58957bef3bba3f18cb7742701d91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T08:00:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T08:00:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:00:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.688759 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cabc647ed15e394b4d5eba51ceef4d606a4f23f4e9b58adf2e06886a7b553f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.696564 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbzlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46cb8e54-c22c-411b-ac49-e08f13849463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae16d5a3bd4990520272971cffdae268cbc97a49274667b89ac4661e2579f8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbzlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.706819 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.719246 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.722900 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.722937 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.722973 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.722996 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.723007 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.730820 5023 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3393ca29-8dc6-4bad-b766-357502c15ae1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac52459526eb5c021f17f2e8942fb41058a433d04e1763946cc0eb0db83496d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78c608da6d59703f9e72fb364a3c20cc42eb1805314145c65b5ced76d443ab16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T08:01:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh6rg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gl755\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.740226 5023 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e27029b-2441-4434-bbd8-849e96acc2da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T08:01:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T08:01:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdvrm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T08:02:13Z is after 2025-08-24T17:21:41Z" Feb 19 08:02:13 crc 
kubenswrapper[5023]: I0219 08:02:13.825813 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.825861 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.825872 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.825890 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.825899 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.928458 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.928498 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.928509 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.928527 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:13 crc kubenswrapper[5023]: I0219 08:02:13.928536 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:13Z","lastTransitionTime":"2026-02-19T08:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.031290 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.031329 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.031347 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.031411 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.031430 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.133664 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.133706 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.133716 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.133732 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.133742 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.236576 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.236652 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.236668 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.236688 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.236703 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.338847 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.338905 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.338919 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.338946 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.338960 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.441176 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.441296 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.441309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.441325 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.441336 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.475832 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.475890 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.475984 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:14 crc kubenswrapper[5023]: E0219 08:02:14.476104 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.476151 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:14 crc kubenswrapper[5023]: E0219 08:02:14.476302 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:14 crc kubenswrapper[5023]: E0219 08:02:14.476339 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:14 crc kubenswrapper[5023]: E0219 08:02:14.476422 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.493611 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:54:02.443694012 +0000 UTC Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.543487 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.543518 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.543526 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.543542 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.543552 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.645534 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.645576 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.645585 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.645600 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.645611 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.748998 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.749071 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.749089 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.749117 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.749193 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.851124 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.851196 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.851218 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.851248 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.851275 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.954373 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.954423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.954432 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.954448 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:14 crc kubenswrapper[5023]: I0219 08:02:14.954460 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:14Z","lastTransitionTime":"2026-02-19T08:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.057379 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.057443 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.057461 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.057486 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.057504 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.160266 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.160308 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.160317 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.160333 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.160343 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.263361 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.263398 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.263408 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.263423 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.263447 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.366098 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.366170 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.366178 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.366194 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.366204 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.469590 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.469719 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.469744 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.469781 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.469802 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.477493 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:02:15 crc kubenswrapper[5023]: E0219 08:02:15.477868 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.495084 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:02:35.728312349 +0000 UTC Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.572252 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.572288 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.572299 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.572362 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.572373 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.675477 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.675527 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.675552 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.675572 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.675585 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.778520 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.778614 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.778673 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.778712 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.778729 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.882688 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.882789 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.882806 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.882832 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.882850 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.986424 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.986468 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.986485 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.986541 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:15 crc kubenswrapper[5023]: I0219 08:02:15.986555 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:15Z","lastTransitionTime":"2026-02-19T08:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.090136 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.090184 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.090200 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.090223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.090240 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.193372 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.193429 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.193449 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.193480 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.193498 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.295811 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.295850 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.295859 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.295875 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.295883 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.398955 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.398991 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.399000 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.399014 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.399025 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.476094 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.476150 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.476094 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:16 crc kubenswrapper[5023]: E0219 08:02:16.476248 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.476321 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:16 crc kubenswrapper[5023]: E0219 08:02:16.476429 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:16 crc kubenswrapper[5023]: E0219 08:02:16.476462 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:16 crc kubenswrapper[5023]: E0219 08:02:16.476525 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.495364 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:39:10.115257948 +0000 UTC Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.501255 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.501298 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.501309 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.501331 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.501344 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.604048 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.604121 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.604138 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.604168 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.604188 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.706670 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.706730 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.706746 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.706763 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.706774 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.808570 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.808740 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.808781 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.808813 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.808837 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.912518 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.912573 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.912589 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.912667 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:16 crc kubenswrapper[5023]: I0219 08:02:16.912703 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:16Z","lastTransitionTime":"2026-02-19T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.015022 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.015069 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.015081 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.015099 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.015110 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.118008 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.118056 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.118069 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.118088 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.118101 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.220504 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.220545 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.220557 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.220578 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.220592 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.322765 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.322815 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.322828 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.322847 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.322858 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.425163 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.425222 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.425237 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.425260 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.425273 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.495724 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:02:53.170699245 +0000 UTC Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.528080 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.528124 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.528134 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.528152 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.528196 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.631270 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.631341 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.631359 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.631385 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.631404 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.734934 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.734978 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.734987 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.735005 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.735014 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.837949 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.837984 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.837993 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.838008 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.838018 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.941238 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.941324 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.941341 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.941368 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:17 crc kubenswrapper[5023]: I0219 08:02:17.941386 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:17Z","lastTransitionTime":"2026-02-19T08:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.044558 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.044682 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.044714 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.044747 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.044770 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.147598 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.147724 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.147755 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.147798 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.147837 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.250932 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.251001 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.251020 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.251049 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.251067 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.354600 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.354696 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.354717 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.354744 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.354761 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.457827 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.457884 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.457902 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.457928 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.457946 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.476518 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.476611 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.476518 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:18 crc kubenswrapper[5023]: E0219 08:02:18.476744 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:18 crc kubenswrapper[5023]: E0219 08:02:18.476838 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:18 crc kubenswrapper[5023]: E0219 08:02:18.476982 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.477142 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:18 crc kubenswrapper[5023]: E0219 08:02:18.477309 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.495924 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:54:57.358424347 +0000 UTC Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.559871 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.559908 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.559917 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.559932 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.559941 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.664189 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.664272 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.664292 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.664330 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.664349 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.768392 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.768464 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.768484 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.768512 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.768533 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.870877 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.870927 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.870936 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.870955 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.870964 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.974754 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.974808 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.974825 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.974855 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:18 crc kubenswrapper[5023]: I0219 08:02:18.974875 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:18Z","lastTransitionTime":"2026-02-19T08:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.078069 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.078140 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.078165 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.078194 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.078217 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.181493 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.181568 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.181586 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.181614 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.181666 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.284108 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.284173 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.284196 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.284223 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.284242 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.387326 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.387374 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.387388 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.387413 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.387426 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.490108 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.490164 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.490186 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.490208 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.490225 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.496283 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:25:53.872886638 +0000 UTC Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.593375 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.593424 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.593436 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.593456 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.593469 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.696723 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.696772 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.696790 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.696813 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.696829 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.800433 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.800512 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.800548 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.800570 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.800583 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.903545 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.903647 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.903672 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.903891 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:19 crc kubenswrapper[5023]: I0219 08:02:19.903925 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:19Z","lastTransitionTime":"2026-02-19T08:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.006471 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.006549 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.006577 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.006680 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.006729 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.111228 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.111318 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.111342 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.111374 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.111402 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.214869 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.214929 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.214939 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.214961 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.214979 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.317980 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.318032 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.318041 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.318058 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.318068 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.421244 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.421308 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.421325 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.421353 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.421371 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.476945 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.477037 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.476961 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:20 crc kubenswrapper[5023]: E0219 08:02:20.477157 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:20 crc kubenswrapper[5023]: E0219 08:02:20.477297 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.477038 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:20 crc kubenswrapper[5023]: E0219 08:02:20.477467 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:20 crc kubenswrapper[5023]: E0219 08:02:20.477675 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.497162 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:27:30.783673284 +0000 UTC Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.531331 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.531432 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.531458 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.531511 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.532267 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.635693 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.635777 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.635890 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.635914 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.635942 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.739666 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.739746 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.739771 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.739806 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.739828 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.843450 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.843505 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.843522 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.843545 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.843562 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.946404 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.946440 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.946449 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.946464 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:20 crc kubenswrapper[5023]: I0219 08:02:20.946472 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:20Z","lastTransitionTime":"2026-02-19T08:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.048129 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.048169 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.048181 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.048196 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.048205 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.151149 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.151189 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.151198 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.151213 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.151224 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.253307 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.253343 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.253354 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.253369 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.253378 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.355855 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.355934 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.355952 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.355976 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.355995 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.458279 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.458389 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.458407 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.458431 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.458450 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.498127 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:20:31.882775451 +0000 UTC Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.560501 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.560543 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.560552 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.560569 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.560579 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.662851 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.662920 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.662937 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.662963 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.662983 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.765411 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.765455 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.765464 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.765480 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.765490 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.864179 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.864235 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.864247 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.864273 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.864293 5023 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T08:02:21Z","lastTransitionTime":"2026-02-19T08:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.917263 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj"] Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.917942 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.919911 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.920643 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.920737 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.921713 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.955215 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5090a668-2468-4226-ab7a-77f79357892c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.955255 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5090a668-2468-4226-ab7a-77f79357892c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.955276 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5090a668-2468-4226-ab7a-77f79357892c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.955313 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5090a668-2468-4226-ab7a-77f79357892c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.955380 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5090a668-2468-4226-ab7a-77f79357892c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:21 crc kubenswrapper[5023]: I0219 08:02:21.980944 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gl755" podStartSLOduration=76.980912307 podStartE2EDuration="1m16.980912307s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:21.969337161 +0000 UTC m=+99.626456109" watchObservedRunningTime="2026-02-19 08:02:21.980912307 +0000 UTC m=+99.638031255" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.026938 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podStartSLOduration=78.026920485 
podStartE2EDuration="1m18.026920485s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.026860114 +0000 UTC m=+99.683979062" watchObservedRunningTime="2026-02-19 08:02:22.026920485 +0000 UTC m=+99.684039433" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.050303 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-74jld" podStartSLOduration=78.050279863 podStartE2EDuration="1m18.050279863s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.045659821 +0000 UTC m=+99.702778809" watchObservedRunningTime="2026-02-19 08:02:22.050279863 +0000 UTC m=+99.707398821" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.056904 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5090a668-2468-4226-ab7a-77f79357892c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.056990 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5090a668-2468-4226-ab7a-77f79357892c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.057022 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/5090a668-2468-4226-ab7a-77f79357892c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.057059 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5090a668-2468-4226-ab7a-77f79357892c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.057081 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5090a668-2468-4226-ab7a-77f79357892c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.057258 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5090a668-2468-4226-ab7a-77f79357892c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.057274 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5090a668-2468-4226-ab7a-77f79357892c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 
19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.058351 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5090a668-2468-4226-ab7a-77f79357892c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.074855 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5090a668-2468-4226-ab7a-77f79357892c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.087173 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5090a668-2468-4226-ab7a-77f79357892c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hngrj\" (UID: \"5090a668-2468-4226-ab7a-77f79357892c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.094135 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.094114224 podStartE2EDuration="1m16.094114224s" podCreationTimestamp="2026-02-19 08:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.093862947 +0000 UTC m=+99.750981895" watchObservedRunningTime="2026-02-19 08:02:22.094114224 +0000 UTC m=+99.751233182" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.094375 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.094370381 podStartE2EDuration="28.094370381s" podCreationTimestamp="2026-02-19 08:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.061904241 +0000 UTC m=+99.719023199" watchObservedRunningTime="2026-02-19 08:02:22.094370381 +0000 UTC m=+99.751489339" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.122694 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-t9v9m" podStartSLOduration=78.12266821 podStartE2EDuration="1m18.12266821s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.122406403 +0000 UTC m=+99.779525381" watchObservedRunningTime="2026-02-19 08:02:22.12266821 +0000 UTC m=+99.779787178" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.131265 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-74fm2" podStartSLOduration=77.131247257 podStartE2EDuration="1m17.131247257s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.130846766 +0000 UTC m=+99.787965724" watchObservedRunningTime="2026-02-19 08:02:22.131247257 +0000 UTC m=+99.788366205" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.181776 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.181761124 podStartE2EDuration="1m19.181761124s" podCreationTimestamp="2026-02-19 08:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 08:02:22.181433375 +0000 UTC m=+99.838552323" watchObservedRunningTime="2026-02-19 08:02:22.181761124 +0000 UTC m=+99.838880072" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.194166 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.194148952 podStartE2EDuration="49.194148952s" podCreationTimestamp="2026-02-19 08:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.193228348 +0000 UTC m=+99.850347306" watchObservedRunningTime="2026-02-19 08:02:22.194148952 +0000 UTC m=+99.851267900" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.231530 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.476575 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.476677 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.476746 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:22 crc kubenswrapper[5023]: E0219 08:02:22.476749 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.476780 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:22 crc kubenswrapper[5023]: E0219 08:02:22.476892 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:22 crc kubenswrapper[5023]: E0219 08:02:22.477073 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:22 crc kubenswrapper[5023]: E0219 08:02:22.477273 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.499171 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:58:30.32634959 +0000 UTC Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.499235 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.508366 5023 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.563449 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:22 crc kubenswrapper[5023]: E0219 08:02:22.563797 5023 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:02:22 crc kubenswrapper[5023]: E0219 08:02:22.563935 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs podName:9e27029b-2441-4434-bbd8-849e96acc2da nodeName:}" failed. No retries permitted until 2026-02-19 08:03:26.56390944 +0000 UTC m=+164.221028428 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs") pod "network-metrics-daemon-bdvrm" (UID: "9e27029b-2441-4434-bbd8-849e96acc2da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.644441 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" event={"ID":"5090a668-2468-4226-ab7a-77f79357892c","Type":"ContainerStarted","Data":"fed99fe18401787c49498a21eddf3554cbc9ba92c3285d7c9144b93cb9dc70ac"} Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.644498 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" event={"ID":"5090a668-2468-4226-ab7a-77f79357892c","Type":"ContainerStarted","Data":"8642d23038d787748b737e67dd79213b95df6f568c4aaf282874376c6399d46d"} Feb 19 08:02:22 crc kubenswrapper[5023]: I0219 08:02:22.657720 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zbzlq" podStartSLOduration=78.657700653 podStartE2EDuration="1m18.657700653s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.219064801 +0000 UTC m=+99.876183759" watchObservedRunningTime="2026-02-19 08:02:22.657700653 +0000 UTC m=+100.314819591" Feb 19 08:02:24 crc kubenswrapper[5023]: I0219 08:02:24.476752 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:24 crc kubenswrapper[5023]: E0219 08:02:24.477120 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:24 crc kubenswrapper[5023]: I0219 08:02:24.476812 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:24 crc kubenswrapper[5023]: I0219 08:02:24.476928 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:24 crc kubenswrapper[5023]: I0219 08:02:24.476832 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:24 crc kubenswrapper[5023]: E0219 08:02:24.477385 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:24 crc kubenswrapper[5023]: E0219 08:02:24.477762 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:24 crc kubenswrapper[5023]: E0219 08:02:24.477909 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:26 crc kubenswrapper[5023]: I0219 08:02:26.476467 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:26 crc kubenswrapper[5023]: I0219 08:02:26.476496 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:26 crc kubenswrapper[5023]: I0219 08:02:26.476589 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:26 crc kubenswrapper[5023]: I0219 08:02:26.476656 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:26 crc kubenswrapper[5023]: E0219 08:02:26.476835 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:26 crc kubenswrapper[5023]: E0219 08:02:26.476936 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:26 crc kubenswrapper[5023]: E0219 08:02:26.477058 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:26 crc kubenswrapper[5023]: E0219 08:02:26.477162 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:28 crc kubenswrapper[5023]: I0219 08:02:28.476082 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:28 crc kubenswrapper[5023]: I0219 08:02:28.476141 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:28 crc kubenswrapper[5023]: I0219 08:02:28.476248 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:28 crc kubenswrapper[5023]: E0219 08:02:28.476259 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:28 crc kubenswrapper[5023]: I0219 08:02:28.476377 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:28 crc kubenswrapper[5023]: E0219 08:02:28.476819 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:28 crc kubenswrapper[5023]: E0219 08:02:28.476971 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:28 crc kubenswrapper[5023]: E0219 08:02:28.477026 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:28 crc kubenswrapper[5023]: I0219 08:02:28.477694 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:02:28 crc kubenswrapper[5023]: E0219 08:02:28.477846 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrqg4_openshift-ovn-kubernetes(cd9177d9-fb83-4fdf-bc43-c8cc552e8e48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" Feb 19 08:02:28 crc kubenswrapper[5023]: I0219 08:02:28.499865 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hngrj" podStartSLOduration=84.499850021 podStartE2EDuration="1m24.499850021s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:22.658586786 +0000 UTC m=+100.315705734" watchObservedRunningTime="2026-02-19 08:02:28.499850021 +0000 UTC m=+106.156968969" Feb 19 08:02:28 crc kubenswrapper[5023]: I0219 08:02:28.500863 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 08:02:30 crc 
kubenswrapper[5023]: I0219 08:02:30.476517 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:30 crc kubenswrapper[5023]: I0219 08:02:30.476593 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:30 crc kubenswrapper[5023]: E0219 08:02:30.476830 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:30 crc kubenswrapper[5023]: I0219 08:02:30.476888 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:30 crc kubenswrapper[5023]: I0219 08:02:30.476936 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:30 crc kubenswrapper[5023]: E0219 08:02:30.477810 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:30 crc kubenswrapper[5023]: E0219 08:02:30.477922 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:30 crc kubenswrapper[5023]: E0219 08:02:30.478321 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:32 crc kubenswrapper[5023]: I0219 08:02:32.476868 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:32 crc kubenswrapper[5023]: I0219 08:02:32.476910 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:32 crc kubenswrapper[5023]: E0219 08:02:32.477116 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:32 crc kubenswrapper[5023]: I0219 08:02:32.477092 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:32 crc kubenswrapper[5023]: E0219 08:02:32.477404 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:32 crc kubenswrapper[5023]: E0219 08:02:32.477549 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:32 crc kubenswrapper[5023]: I0219 08:02:32.477818 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:32 crc kubenswrapper[5023]: E0219 08:02:32.477905 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:33 crc kubenswrapper[5023]: I0219 08:02:33.528003 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.527578822 podStartE2EDuration="5.527578822s" podCreationTimestamp="2026-02-19 08:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:33.526263817 +0000 UTC m=+111.183382805" watchObservedRunningTime="2026-02-19 08:02:33.527578822 +0000 UTC m=+111.184697800" Feb 19 08:02:34 crc kubenswrapper[5023]: I0219 08:02:34.476455 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:34 crc kubenswrapper[5023]: I0219 08:02:34.476589 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:34 crc kubenswrapper[5023]: I0219 08:02:34.476639 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:34 crc kubenswrapper[5023]: I0219 08:02:34.476790 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:34 crc kubenswrapper[5023]: E0219 08:02:34.476797 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:34 crc kubenswrapper[5023]: E0219 08:02:34.476920 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:34 crc kubenswrapper[5023]: E0219 08:02:34.477008 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:34 crc kubenswrapper[5023]: E0219 08:02:34.477069 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:36 crc kubenswrapper[5023]: I0219 08:02:36.476494 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:36 crc kubenswrapper[5023]: I0219 08:02:36.476794 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:36 crc kubenswrapper[5023]: I0219 08:02:36.476797 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:36 crc kubenswrapper[5023]: I0219 08:02:36.476812 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:36 crc kubenswrapper[5023]: E0219 08:02:36.477007 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:36 crc kubenswrapper[5023]: E0219 08:02:36.477106 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:36 crc kubenswrapper[5023]: E0219 08:02:36.477210 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:36 crc kubenswrapper[5023]: E0219 08:02:36.477308 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.476677 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.476754 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.476939 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:38 crc kubenswrapper[5023]: E0219 08:02:38.476947 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.477048 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:38 crc kubenswrapper[5023]: E0219 08:02:38.477203 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:38 crc kubenswrapper[5023]: E0219 08:02:38.477293 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:38 crc kubenswrapper[5023]: E0219 08:02:38.477424 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.703450 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/1.log" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.704213 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/0.log" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.704278 5023 generic.go:334] "Generic (PLEG): container finished" podID="c4610eec-5318-4742-b598-a88feb94cf7d" containerID="89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2" exitCode=1 Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.704317 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerDied","Data":"89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2"} Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.704364 5023 scope.go:117] "RemoveContainer" containerID="35679b635ea1b925932d3c30b34be6b4ffaaf1c4385397a07d097ecd4fc1bd13" Feb 19 08:02:38 crc kubenswrapper[5023]: I0219 08:02:38.705222 5023 scope.go:117] "RemoveContainer" containerID="89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2" Feb 19 08:02:38 crc kubenswrapper[5023]: E0219 08:02:38.705698 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-t9v9m_openshift-multus(c4610eec-5318-4742-b598-a88feb94cf7d)\"" pod="openshift-multus/multus-t9v9m" podUID="c4610eec-5318-4742-b598-a88feb94cf7d" Feb 19 08:02:39 crc kubenswrapper[5023]: I0219 08:02:39.709587 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/1.log" Feb 19 08:02:40 crc kubenswrapper[5023]: I0219 08:02:40.476412 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:40 crc kubenswrapper[5023]: I0219 08:02:40.476477 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:40 crc kubenswrapper[5023]: I0219 08:02:40.476733 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:40 crc kubenswrapper[5023]: I0219 08:02:40.476761 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:40 crc kubenswrapper[5023]: E0219 08:02:40.476866 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:40 crc kubenswrapper[5023]: E0219 08:02:40.476920 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:40 crc kubenswrapper[5023]: E0219 08:02:40.477061 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:40 crc kubenswrapper[5023]: E0219 08:02:40.477192 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.480999 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:42 crc kubenswrapper[5023]: E0219 08:02:42.481194 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.481462 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:42 crc kubenswrapper[5023]: E0219 08:02:42.481516 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.481640 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:42 crc kubenswrapper[5023]: E0219 08:02:42.481686 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.482452 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.482834 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:42 crc kubenswrapper[5023]: E0219 08:02:42.482885 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.721082 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/3.log" Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.725437 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerStarted","Data":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"} Feb 19 08:02:42 crc kubenswrapper[5023]: I0219 08:02:42.725932 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:02:43 crc kubenswrapper[5023]: E0219 08:02:43.496123 5023 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 08:02:43 crc kubenswrapper[5023]: I0219 08:02:43.519742 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podStartSLOduration=99.519704808 podStartE2EDuration="1m39.519704808s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:02:42.766061138 +0000 UTC m=+120.423180086" watchObservedRunningTime="2026-02-19 08:02:43.519704808 +0000 UTC m=+121.176823846" Feb 19 08:02:43 crc kubenswrapper[5023]: I0219 08:02:43.520402 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bdvrm"] Feb 19 08:02:43 crc kubenswrapper[5023]: I0219 08:02:43.520608 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:43 crc kubenswrapper[5023]: E0219 08:02:43.522099 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:44 crc kubenswrapper[5023]: I0219 08:02:44.475980 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:44 crc kubenswrapper[5023]: I0219 08:02:44.475981 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:44 crc kubenswrapper[5023]: I0219 08:02:44.476162 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:44 crc kubenswrapper[5023]: E0219 08:02:44.476266 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:44 crc kubenswrapper[5023]: E0219 08:02:44.476426 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:44 crc kubenswrapper[5023]: E0219 08:02:44.476780 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:44 crc kubenswrapper[5023]: E0219 08:02:44.494781 5023 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:02:45 crc kubenswrapper[5023]: I0219 08:02:45.475956 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:45 crc kubenswrapper[5023]: E0219 08:02:45.476651 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:46 crc kubenswrapper[5023]: I0219 08:02:46.476460 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:46 crc kubenswrapper[5023]: I0219 08:02:46.476474 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:46 crc kubenswrapper[5023]: E0219 08:02:46.476724 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:46 crc kubenswrapper[5023]: I0219 08:02:46.476475 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:46 crc kubenswrapper[5023]: E0219 08:02:46.476943 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:46 crc kubenswrapper[5023]: E0219 08:02:46.477056 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:47 crc kubenswrapper[5023]: I0219 08:02:47.476707 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:47 crc kubenswrapper[5023]: E0219 08:02:47.476981 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:48 crc kubenswrapper[5023]: I0219 08:02:48.475830 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:48 crc kubenswrapper[5023]: E0219 08:02:48.476026 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:48 crc kubenswrapper[5023]: I0219 08:02:48.476087 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:48 crc kubenswrapper[5023]: I0219 08:02:48.476185 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:48 crc kubenswrapper[5023]: E0219 08:02:48.476311 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:48 crc kubenswrapper[5023]: E0219 08:02:48.476677 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:49 crc kubenswrapper[5023]: I0219 08:02:49.476923 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:49 crc kubenswrapper[5023]: E0219 08:02:49.477219 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:49 crc kubenswrapper[5023]: I0219 08:02:49.477997 5023 scope.go:117] "RemoveContainer" containerID="89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2" Feb 19 08:02:49 crc kubenswrapper[5023]: E0219 08:02:49.496139 5023 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 08:02:49 crc kubenswrapper[5023]: I0219 08:02:49.760235 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/1.log" Feb 19 08:02:49 crc kubenswrapper[5023]: I0219 08:02:49.760302 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerStarted","Data":"53f82719807858d3252130c7af753083dc16b9fef14657edc2ba546952e32400"} Feb 19 08:02:50 crc kubenswrapper[5023]: I0219 08:02:50.476289 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:50 crc kubenswrapper[5023]: I0219 08:02:50.476456 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:50 crc kubenswrapper[5023]: E0219 08:02:50.476914 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:50 crc kubenswrapper[5023]: E0219 08:02:50.476979 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:50 crc kubenswrapper[5023]: I0219 08:02:50.477186 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:50 crc kubenswrapper[5023]: E0219 08:02:50.477413 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:51 crc kubenswrapper[5023]: I0219 08:02:51.476467 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:51 crc kubenswrapper[5023]: E0219 08:02:51.477146 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:52 crc kubenswrapper[5023]: I0219 08:02:52.476464 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:52 crc kubenswrapper[5023]: I0219 08:02:52.476599 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:52 crc kubenswrapper[5023]: I0219 08:02:52.476614 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:52 crc kubenswrapper[5023]: E0219 08:02:52.476769 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:52 crc kubenswrapper[5023]: E0219 08:02:52.476941 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:52 crc kubenswrapper[5023]: E0219 08:02:52.477168 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:53 crc kubenswrapper[5023]: I0219 08:02:53.476397 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:53 crc kubenswrapper[5023]: E0219 08:02:53.477400 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdvrm" podUID="9e27029b-2441-4434-bbd8-849e96acc2da" Feb 19 08:02:54 crc kubenswrapper[5023]: I0219 08:02:54.476286 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:54 crc kubenswrapper[5023]: I0219 08:02:54.476391 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:54 crc kubenswrapper[5023]: E0219 08:02:54.476458 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 08:02:54 crc kubenswrapper[5023]: I0219 08:02:54.476527 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:54 crc kubenswrapper[5023]: E0219 08:02:54.476604 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 08:02:54 crc kubenswrapper[5023]: E0219 08:02:54.476835 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 08:02:55 crc kubenswrapper[5023]: I0219 08:02:55.476314 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:02:55 crc kubenswrapper[5023]: I0219 08:02:55.480130 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 08:02:55 crc kubenswrapper[5023]: I0219 08:02:55.481717 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 08:02:56 crc kubenswrapper[5023]: I0219 08:02:56.476356 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:02:56 crc kubenswrapper[5023]: I0219 08:02:56.476369 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:02:56 crc kubenswrapper[5023]: I0219 08:02:56.476521 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:02:56 crc kubenswrapper[5023]: I0219 08:02:56.480289 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 08:02:56 crc kubenswrapper[5023]: I0219 08:02:56.481089 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 08:02:56 crc kubenswrapper[5023]: I0219 08:02:56.481205 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 08:02:56 crc kubenswrapper[5023]: I0219 08:02:56.481581 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.500509 5023 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.551904 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8qx2d"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.552794 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.555521 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xsnwk"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.556585 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.558024 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrmbc"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.558672 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.566552 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.567530 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.568413 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.569429 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.570363 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t2bq8"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.570894 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t2bq8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.577446 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.578432 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.579068 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.579910 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.579982 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-t88r2"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.580338 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.581715 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.582162 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.584024 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.584924 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.585178 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.586839 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587014 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587132 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587259 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587387 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587508 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587578 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 08:03:02 crc 
kubenswrapper[5023]: I0219 08:03:02.587679 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587872 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.587914 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588005 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588073 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588145 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588233 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588257 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588361 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588466 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588516 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:03:02 crc 
kubenswrapper[5023]: I0219 08:03:02.588669 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588704 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.588476 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.590578 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mrpgz"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.591696 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.591922 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.592065 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.593058 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.601050 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.601676 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.602153 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.602238 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.602836 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.603461 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.603987 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.604066 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.604209 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.604420 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.605295 5023 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.606682 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.606939 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.607226 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.607729 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.608237 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.608670 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.608845 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.609218 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.609827 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.612289 5023 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.614490 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.628608 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.628831 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.630217 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.630445 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.630752 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.632645 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.633286 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.634611 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fzc2t"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.635088 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.635861 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.640302 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.640753 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bsqp5"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.641430 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.642016 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.642464 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.646126 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.647913 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.651983 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.654292 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.654845 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.655536 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.655607 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.655696 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.655838 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.655841 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.656155 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.656606 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.660710 5023 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p865l"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.661409 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.661873 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.661947 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.662241 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.662380 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rndqk"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.662958 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.663716 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.664194 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.670954 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.671700 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.671877 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.672064 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.675779 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.676113 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.676197 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.676241 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 
08:03:02.676332 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.676429 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677166 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677429 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677427 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677537 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677435 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677644 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677509 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677582 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677766 5023 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677874 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.677985 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678069 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678114 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678155 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678235 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678253 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678350 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678371 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678373 5023 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678477 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.678506 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.680123 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.680398 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.685377 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xn8fp"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.685841 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.685841 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.686208 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.687126 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.694718 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.697100 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.702429 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.705527 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.705909 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.708062 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.709075 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2plfz"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711238 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-image-import-ca\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711474 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-oauth-serving-cert\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711603 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60b80f60-dcea-468b-9d71-a588df152168-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711675 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-client-ca\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711727 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-config\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711778 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-audit\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711838 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce979ece-fcf5-4ecb-895c-067f82b9927c-trusted-ca\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711895 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-node-pullsecrets\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711949 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7r5\" (UniqueName: \"kubernetes.io/projected/473d61a9-cdf6-4f1b-9727-ec1f00482f00-kube-api-access-rn7r5\") 
pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.712423 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-serving-cert\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.711841 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.712491 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.712609 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.713431 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60b80f60-dcea-468b-9d71-a588df152168-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.716825 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-serving-cert\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " 
pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.716933 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnrq\" (UniqueName: \"kubernetes.io/projected/78a61028-ddc3-4560-8fe7-83deff82f5d7-kube-api-access-4qnrq\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.716983 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728026 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-audit-dir\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728072 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728092 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-oauth-config\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728150 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728173 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djn48\" (UniqueName: \"kubernetes.io/projected/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-kube-api-access-djn48\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728196 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/78a61028-ddc3-4560-8fe7-83deff82f5d7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728217 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-serving-cert\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc 
kubenswrapper[5023]: I0219 08:03:02.728235 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce979ece-fcf5-4ecb-895c-067f82b9927c-config\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728252 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce979ece-fcf5-4ecb-895c-067f82b9927c-serving-cert\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728269 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a61028-ddc3-4560-8fe7-83deff82f5d7-config\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728286 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1772c08-71ce-47f2-be19-6b588dd6e7d5-serving-cert\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728304 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-etcd-client\") pod \"apiserver-76f77b778f-mrpgz\" (UID: 
\"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728329 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-etcd-client\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728349 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-config\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728370 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/46f2d3f1-2dad-40b9-aa13-78c000643917-machine-approver-tls\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728386 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf96s\" (UniqueName: \"kubernetes.io/projected/ce979ece-fcf5-4ecb-895c-067f82b9927c-kube-api-access-mf96s\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728455 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728648 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgm45\" (UniqueName: \"kubernetes.io/projected/46f2d3f1-2dad-40b9-aa13-78c000643917-kube-api-access-zgm45\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728684 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7s9f\" (UniqueName: \"kubernetes.io/projected/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-kube-api-access-r7s9f\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728701 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj467\" (UniqueName: \"kubernetes.io/projected/d1772c08-71ce-47f2-be19-6b588dd6e7d5-kube-api-access-dj467\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728746 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46f2d3f1-2dad-40b9-aa13-78c000643917-auth-proxy-config\") pod \"machine-approver-56656f9798-8xggn\" (UID: 
\"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728768 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728788 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzv5\" (UniqueName: \"kubernetes.io/projected/09d951f5-0719-4876-b71c-034c74a7e27d-kube-api-access-2jzv5\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728829 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-trusted-ca-bundle\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728870 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-etcd-serving-ca\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728900 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f2d3f1-2dad-40b9-aa13-78c000643917-config\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728929 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09d951f5-0719-4876-b71c-034c74a7e27d-audit-dir\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728958 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-config\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728981 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnhqq\" (UniqueName: \"kubernetes.io/projected/1949d038-0d2f-49f5-be36-8ed7a890264c-kube-api-access-pnhqq\") pod \"downloads-7954f5f757-t2bq8\" (UID: \"1949d038-0d2f-49f5-be36-8ed7a890264c\") " pod="openshift-console/downloads-7954f5f757-t2bq8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.728999 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zml46\" (UniqueName: \"kubernetes.io/projected/42a67254-cc33-40f4-ad79-2fcfdac7871e-kube-api-access-zml46\") pod \"cluster-samples-operator-665b6dd947-xsnx6\" (UID: \"42a67254-cc33-40f4-ad79-2fcfdac7871e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" 
Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729029 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-audit-policies\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729053 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-encryption-config\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729094 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a67254-cc33-40f4-ad79-2fcfdac7871e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xsnx6\" (UID: \"42a67254-cc33-40f4-ad79-2fcfdac7871e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729112 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpfm4\" (UniqueName: \"kubernetes.io/projected/60b80f60-dcea-468b-9d71-a588df152168-kube-api-access-lpfm4\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729137 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-serving-cert\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729302 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/78a61028-ddc3-4560-8fe7-83deff82f5d7-images\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729378 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-encryption-config\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729475 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-service-ca\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729698 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.729939 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.730950 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wcq7s"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.731880 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.732704 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.732906 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.733544 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.733645 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.735221 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.736887 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.740098 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5dl48"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.742565 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.745111 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.745150 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.745262 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.745669 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.745736 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.746129 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xxg6k"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.746377 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.747706 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gg2zr"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.748145 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.749879 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xbwvq"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.750422 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.750608 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.750904 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.751566 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.751837 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.751981 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8qx2d"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.752089 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.753172 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xsnwk"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.754566 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.755773 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.757244 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrmbc"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.758331 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-d85fh"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.758877 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.759837 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t2bq8"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.760896 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.762090 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.763129 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fzc2t"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.764723 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.765729 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.769091 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5dl48"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.774864 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.779741 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.783687 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.793299 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.794592 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.798777 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t88r2"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.803649 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bsqp5"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.810066 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.811476 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.813072 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.814570 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xn8fp"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.815565 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.816169 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.818292 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.819847 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mrpgz"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.821228 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wcnx4"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.822659 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.823024 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.823907 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.826512 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2plfz"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.828290 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.829586 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xxg6k"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830029 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-etcd-serving-ca\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830065 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f2d3f1-2dad-40b9-aa13-78c000643917-config\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830095 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09d951f5-0719-4876-b71c-034c74a7e27d-audit-dir\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830119 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-audit-policies\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830147 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-config\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830174 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnhqq\" (UniqueName: 
\"kubernetes.io/projected/1949d038-0d2f-49f5-be36-8ed7a890264c-kube-api-access-pnhqq\") pod \"downloads-7954f5f757-t2bq8\" (UID: \"1949d038-0d2f-49f5-be36-8ed7a890264c\") " pod="openshift-console/downloads-7954f5f757-t2bq8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830205 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zml46\" (UniqueName: \"kubernetes.io/projected/42a67254-cc33-40f4-ad79-2fcfdac7871e-kube-api-access-zml46\") pod \"cluster-samples-operator-665b6dd947-xsnx6\" (UID: \"42a67254-cc33-40f4-ad79-2fcfdac7871e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830248 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-encryption-config\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830279 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a67254-cc33-40f4-ad79-2fcfdac7871e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xsnx6\" (UID: \"42a67254-cc33-40f4-ad79-2fcfdac7871e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830305 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpfm4\" (UniqueName: \"kubernetes.io/projected/60b80f60-dcea-468b-9d71-a588df152168-kube-api-access-lpfm4\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830331 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-serving-cert\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830369 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/78a61028-ddc3-4560-8fe7-83deff82f5d7-images\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830396 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-encryption-config\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830430 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f914dcd-a03e-4b76-beb7-abf3493fbc28-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830464 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-service-ca\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830489 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-image-import-ca\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830536 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f914dcd-a03e-4b76-beb7-abf3493fbc28-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830567 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79llb\" (UniqueName: \"kubernetes.io/projected/88d11d81-41f6-47db-826a-9a0d3f2d6049-kube-api-access-79llb\") pod \"dns-operator-744455d44c-2plfz\" (UID: \"88d11d81-41f6-47db-826a-9a0d3f2d6049\") " pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830600 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-oauth-serving-cert\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830647 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60b80f60-dcea-468b-9d71-a588df152168-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830674 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-client-ca\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830701 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce979ece-fcf5-4ecb-895c-067f82b9927c-trusted-ca\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830738 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-node-pullsecrets\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830764 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-config\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 
crc kubenswrapper[5023]: I0219 08:03:02.830789 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-audit\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830819 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7r5\" (UniqueName: \"kubernetes.io/projected/473d61a9-cdf6-4f1b-9727-ec1f00482f00-kube-api-access-rn7r5\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830846 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-serving-cert\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830875 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60b80f60-dcea-468b-9d71-a588df152168-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830903 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-serving-cert\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " 
pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830929 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnrq\" (UniqueName: \"kubernetes.io/projected/78a61028-ddc3-4560-8fe7-83deff82f5d7-kube-api-access-4qnrq\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830956 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.830987 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831014 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-audit-dir\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831059 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-oauth-config\") pod \"console-f9d7485db-t88r2\" (UID: 
\"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831087 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831114 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce979ece-fcf5-4ecb-895c-067f82b9927c-config\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831142 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce979ece-fcf5-4ecb-895c-067f82b9927c-serving-cert\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831171 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djn48\" (UniqueName: \"kubernetes.io/projected/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-kube-api-access-djn48\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831199 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/78a61028-ddc3-4560-8fe7-83deff82f5d7-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831226 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-serving-cert\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831252 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-etcd-client\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831281 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a61028-ddc3-4560-8fe7-83deff82f5d7-config\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831307 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1772c08-71ce-47f2-be19-6b588dd6e7d5-serving-cert\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831333 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-etcd-client\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831358 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/46f2d3f1-2dad-40b9-aa13-78c000643917-machine-approver-tls\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831381 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-config\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831403 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-etcd-serving-ca\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831416 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f2d3f1-2dad-40b9-aa13-78c000643917-config\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831408 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mf96s\" (UniqueName: \"kubernetes.io/projected/ce979ece-fcf5-4ecb-895c-067f82b9927c-kube-api-access-mf96s\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831529 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831572 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88d11d81-41f6-47db-826a-9a0d3f2d6049-metrics-tls\") pod \"dns-operator-744455d44c-2plfz\" (UID: \"88d11d81-41f6-47db-826a-9a0d3f2d6049\") " pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831716 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgm45\" (UniqueName: \"kubernetes.io/projected/46f2d3f1-2dad-40b9-aa13-78c000643917-kube-api-access-zgm45\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831750 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7s9f\" (UniqueName: \"kubernetes.io/projected/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-kube-api-access-r7s9f\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831776 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj467\" (UniqueName: \"kubernetes.io/projected/d1772c08-71ce-47f2-be19-6b588dd6e7d5-kube-api-access-dj467\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831803 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbt4f\" (UniqueName: \"kubernetes.io/projected/9f914dcd-a03e-4b76-beb7-abf3493fbc28-kube-api-access-sbt4f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831829 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-config\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831844 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzv5\" (UniqueName: \"kubernetes.io/projected/09d951f5-0719-4876-b71c-034c74a7e27d-kube-api-access-2jzv5\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831893 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/46f2d3f1-2dad-40b9-aa13-78c000643917-auth-proxy-config\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831924 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831931 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09d951f5-0719-4876-b71c-034c74a7e27d-audit-dir\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.831964 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-trusted-ca-bundle\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.832560 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-audit-policies\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.832967 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/78a61028-ddc3-4560-8fe7-83deff82f5d7-images\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.834015 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce979ece-fcf5-4ecb-895c-067f82b9927c-trusted-ca\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.834139 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a61028-ddc3-4560-8fe7-83deff82f5d7-config\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.834993 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.835069 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.836082 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-node-pullsecrets\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.836154 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-audit-dir\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.836612 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-config\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.837129 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.837385 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46f2d3f1-2dad-40b9-aa13-78c000643917-auth-proxy-config\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.838695 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-audit\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 
08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.838972 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.839078 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-config\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.839176 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce979ece-fcf5-4ecb-895c-067f82b9927c-config\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.839231 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.839428 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-image-import-ca\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.839559 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-serving-cert\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.839861 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-encryption-config\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.840153 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wcq7s"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.840240 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-client-ca\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.840219 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.840935 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-trusted-ca-bundle\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc 
kubenswrapper[5023]: I0219 08:03:02.841091 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09d951f5-0719-4876-b71c-034c74a7e27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.841215 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-service-ca\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.841483 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-oauth-serving-cert\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.841594 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60b80f60-dcea-468b-9d71-a588df152168-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.842361 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce979ece-fcf5-4ecb-895c-067f82b9927c-serving-cert\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:02 
crc kubenswrapper[5023]: I0219 08:03:02.842411 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.843503 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/46f2d3f1-2dad-40b9-aa13-78c000643917-machine-approver-tls\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.843530 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p865l"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.843957 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-etcd-client\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.843982 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60b80f60-dcea-468b-9d71-a588df152168-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.844337 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-serving-cert\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.844756 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1772c08-71ce-47f2-be19-6b588dd6e7d5-serving-cert\") pod \"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.845284 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/78a61028-ddc3-4560-8fe7-83deff82f5d7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.845333 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a67254-cc33-40f4-ad79-2fcfdac7871e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xsnx6\" (UID: \"42a67254-cc33-40f4-ad79-2fcfdac7871e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.845463 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rndqk"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.845442 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-oauth-config\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.845490 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-serving-cert\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.845950 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-encryption-config\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.845957 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09d951f5-0719-4876-b71c-034c74a7e27d-serving-cert\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.846585 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gg2zr"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.851655 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.852933 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.854103 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d85fh"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.855898 5023 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.856091 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-etcd-client\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.858502 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wcnx4"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.861551 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-r9spm"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.863072 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.863225 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fs6pk"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.868701 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.869489 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r9spm"] Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.877949 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.895155 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.915037 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.932579 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88d11d81-41f6-47db-826a-9a0d3f2d6049-metrics-tls\") pod \"dns-operator-744455d44c-2plfz\" (UID: \"88d11d81-41f6-47db-826a-9a0d3f2d6049\") " pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.932652 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbt4f\" (UniqueName: \"kubernetes.io/projected/9f914dcd-a03e-4b76-beb7-abf3493fbc28-kube-api-access-sbt4f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.932736 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f914dcd-a03e-4b76-beb7-abf3493fbc28-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.932757 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f914dcd-a03e-4b76-beb7-abf3493fbc28-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.932776 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79llb\" (UniqueName: \"kubernetes.io/projected/88d11d81-41f6-47db-826a-9a0d3f2d6049-kube-api-access-79llb\") pod \"dns-operator-744455d44c-2plfz\" (UID: \"88d11d81-41f6-47db-826a-9a0d3f2d6049\") " pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.933605 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f914dcd-a03e-4b76-beb7-abf3493fbc28-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.935537 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.935860 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f914dcd-a03e-4b76-beb7-abf3493fbc28-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.955801 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.974767 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 08:03:02 crc kubenswrapper[5023]: I0219 08:03:02.995965 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.014856 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.035215 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.056674 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.088241 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.095146 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.114786 5023 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.140513 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.155706 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.175881 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.196383 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.215440 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.235416 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.255950 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.275758 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.299696 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.316338 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 08:03:03 crc 
kubenswrapper[5023]: I0219 08:03:03.335438 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.354810 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.375353 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.416261 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.435394 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.455463 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.475869 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.495990 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.506714 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88d11d81-41f6-47db-826a-9a0d3f2d6049-metrics-tls\") pod \"dns-operator-744455d44c-2plfz\" (UID: \"88d11d81-41f6-47db-826a-9a0d3f2d6049\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.515679 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.536511 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.555543 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.576541 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.617102 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.635836 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.656742 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.676131 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.695840 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.716571 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 
08:03:03.734977 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.754003 5023 request.go:700] Waited for 1.008373594s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.756744 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.776646 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.795445 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.816524 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.835845 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.855839 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.876961 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.895482 5023 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.915645 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.936490 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.955102 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 08:03:03 crc kubenswrapper[5023]: I0219 08:03:03.976123 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.004097 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.015378 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.035150 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.040586 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.056439 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.076454 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 08:03:04 crc kubenswrapper[5023]: 
I0219 08:03:04.096475 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.115875 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.136720 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.156172 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.175654 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.195572 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.215157 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.234838 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.255314 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.275603 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.295467 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.315806 5023 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.335540 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.354516 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.376014 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.395253 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.415509 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.434837 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.455030 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.475604 5023 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.515689 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf96s\" (UniqueName: \"kubernetes.io/projected/ce979ece-fcf5-4ecb-895c-067f82b9927c-kube-api-access-mf96s\") pod \"console-operator-58897d9998-8qx2d\" (UID: \"ce979ece-fcf5-4ecb-895c-067f82b9927c\") " 
pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.533442 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnhqq\" (UniqueName: \"kubernetes.io/projected/1949d038-0d2f-49f5-be36-8ed7a890264c-kube-api-access-pnhqq\") pod \"downloads-7954f5f757-t2bq8\" (UID: \"1949d038-0d2f-49f5-be36-8ed7a890264c\") " pod="openshift-console/downloads-7954f5f757-t2bq8" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.553478 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zml46\" (UniqueName: \"kubernetes.io/projected/42a67254-cc33-40f4-ad79-2fcfdac7871e-kube-api-access-zml46\") pod \"cluster-samples-operator-665b6dd947-xsnx6\" (UID: \"42a67254-cc33-40f4-ad79-2fcfdac7871e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.581324 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djn48\" (UniqueName: \"kubernetes.io/projected/6f7c7288-0b1f-4c0c-9271-0b29ae23a3db-kube-api-access-djn48\") pod \"apiserver-76f77b778f-mrpgz\" (UID: \"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db\") " pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.606428 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgm45\" (UniqueName: \"kubernetes.io/projected/46f2d3f1-2dad-40b9-aa13-78c000643917-kube-api-access-zgm45\") pod \"machine-approver-56656f9798-8xggn\" (UID: \"46f2d3f1-2dad-40b9-aa13-78c000643917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.613525 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.620825 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.627376 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7s9f\" (UniqueName: \"kubernetes.io/projected/ce633a6d-590d-49af-9daa-b1e1c2cdfbf7-kube-api-access-r7s9f\") pod \"openshift-config-operator-7777fb866f-nvtc8\" (UID: \"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.632365 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnrq\" (UniqueName: \"kubernetes.io/projected/78a61028-ddc3-4560-8fe7-83deff82f5d7-kube-api-access-4qnrq\") pod \"machine-api-operator-5694c8668f-xsnwk\" (UID: \"78a61028-ddc3-4560-8fe7-83deff82f5d7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:04 crc kubenswrapper[5023]: W0219 08:03:04.635524 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f2d3f1_2dad_40b9_aa13_78c000643917.slice/crio-be02baea11834de51df905cbd0fde36e5631e1d7f8fc4fd13bd80005b5ac9fa6 WatchSource:0}: Error finding container be02baea11834de51df905cbd0fde36e5631e1d7f8fc4fd13bd80005b5ac9fa6: Status 404 returned error can't find the container with id be02baea11834de51df905cbd0fde36e5631e1d7f8fc4fd13bd80005b5ac9fa6 Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.665122 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj467\" (UniqueName: \"kubernetes.io/projected/d1772c08-71ce-47f2-be19-6b588dd6e7d5-kube-api-access-dj467\") pod 
\"controller-manager-879f6c89f-mrmbc\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.681654 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.682532 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzv5\" (UniqueName: \"kubernetes.io/projected/09d951f5-0719-4876-b71c-034c74a7e27d-kube-api-access-2jzv5\") pod \"apiserver-7bbb656c7d-dc2n7\" (UID: \"09d951f5-0719-4876-b71c-034c74a7e27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.689654 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7r5\" (UniqueName: \"kubernetes.io/projected/473d61a9-cdf6-4f1b-9727-ec1f00482f00-kube-api-access-rn7r5\") pod \"console-f9d7485db-t88r2\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.706277 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.718566 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.725729 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpfm4\" (UniqueName: \"kubernetes.io/projected/60b80f60-dcea-468b-9d71-a588df152168-kube-api-access-lpfm4\") pod \"openshift-controller-manager-operator-756b6f6bc6-kdsm7\" (UID: \"60b80f60-dcea-468b-9d71-a588df152168\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.735526 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.736570 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.755181 5023 request.go:700] Waited for 1.891671686s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0 Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.757604 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.758639 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.776078 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.797315 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.801354 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.817546 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t2bq8" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.819011 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.827427 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" event={"ID":"46f2d3f1-2dad-40b9-aa13-78c000643917","Type":"ContainerStarted","Data":"be02baea11834de51df905cbd0fde36e5631e1d7f8fc4fd13bd80005b5ac9fa6"} Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.847566 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.858737 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.861018 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79llb\" (UniqueName: \"kubernetes.io/projected/88d11d81-41f6-47db-826a-9a0d3f2d6049-kube-api-access-79llb\") pod \"dns-operator-744455d44c-2plfz\" (UID: \"88d11d81-41f6-47db-826a-9a0d3f2d6049\") " pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.869219 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.873847 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbt4f\" (UniqueName: \"kubernetes.io/projected/9f914dcd-a03e-4b76-beb7-abf3493fbc28-kube-api-access-sbt4f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4z6gh\" (UID: \"9f914dcd-a03e-4b76-beb7-abf3493fbc28\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.879960 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mrpgz"] Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.959703 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4088dce2-3801-4d93-be23-fd29006fd89c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.959995 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960016 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10c016d6-83ef-40e3-81f3-fff5008a34d8-trusted-ca\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960034 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94da16c8-dcc7-4cd7-945f-0d6ab6220956-images\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960062 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960080 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-certificates\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960099 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-config\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960119 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6d74\" (UniqueName: \"kubernetes.io/projected/dda08cd9-0a13-4887-b853-7677fad599f8-kube-api-access-g6d74\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960137 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8l9f\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-kube-api-access-k8l9f\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960164 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjkp\" (UniqueName: \"kubernetes.io/projected/2b400757-85ec-48a0-a962-1388812039fd-kube-api-access-btjkp\") pod \"migrator-59844c95c7-4lwsv\" (UID: \"2b400757-85ec-48a0-a962-1388812039fd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960191 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-ca\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960218 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-config\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960235 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f007d16-9224-42da-a0cd-86099e2846c0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960252 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-audit-policies\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960269 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9180272-479b-49b4-a59d-cf76b537331c-serving-cert\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 
08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960282 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4088dce2-3801-4d93-be23-fd29006fd89c-config\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960317 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960334 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-bound-sa-token\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960348 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960372 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ldp\" 
(UniqueName: \"kubernetes.io/projected/94da16c8-dcc7-4cd7-945f-0d6ab6220956-kube-api-access-82ldp\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960390 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-config\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960406 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960422 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960437 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: 
\"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960474 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960492 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4088dce2-3801-4d93-be23-fd29006fd89c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960559 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctsn\" (UniqueName: \"kubernetes.io/projected/4de168c8-11e8-4d1a-b20d-5753b288f5d6-kube-api-access-zctsn\") pod \"package-server-manager-789f6589d5-bs4qh\" (UID: \"4de168c8-11e8-4d1a-b20d-5753b288f5d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960608 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dda08cd9-0a13-4887-b853-7677fad599f8-webhook-cert\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960640 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-service-ca-bundle\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960655 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rzz4\" (UniqueName: \"kubernetes.io/projected/80a649ee-bc87-4ba9-9b01-2760d76d78cd-kube-api-access-6rzz4\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960681 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834629bf-75a3-4241-b3ce-2aec76e34a3b-serving-cert\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960698 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960714 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w97wh\" (UniqueName: 
\"kubernetes.io/projected/ac2444b2-3e6c-4704-b065-abf105add63c-kube-api-access-w97wh\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960728 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-client-ca\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960753 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac2444b2-3e6c-4704-b065-abf105add63c-audit-dir\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960769 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4de168c8-11e8-4d1a-b20d-5753b288f5d6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bs4qh\" (UID: \"4de168c8-11e8-4d1a-b20d-5753b288f5d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960785 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960799 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960862 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a642adbe-beb8-43c1-aedc-d0bc9c35f049-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960886 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960903 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960927 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dda08cd9-0a13-4887-b853-7677fad599f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960943 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrbt\" (UniqueName: \"kubernetes.io/projected/10c016d6-83ef-40e3-81f3-fff5008a34d8-kube-api-access-rnrbt\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960959 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6lfr\" (UniqueName: \"kubernetes.io/projected/9f007d16-9224-42da-a0cd-86099e2846c0-kube-api-access-k6lfr\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960984 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.960999 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a649ee-bc87-4ba9-9b01-2760d76d78cd-serving-cert\") pod 
\"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961032 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6z4\" (UniqueName: \"kubernetes.io/projected/834629bf-75a3-4241-b3ce-2aec76e34a3b-kube-api-access-jg6z4\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961046 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dda08cd9-0a13-4887-b853-7677fad599f8-tmpfs\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961061 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94da16c8-dcc7-4cd7-945f-0d6ab6220956-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961093 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10c016d6-83ef-40e3-81f3-fff5008a34d8-metrics-tls\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961108 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f007d16-9224-42da-a0cd-86099e2846c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961124 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961147 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961161 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-service-ca\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961177 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-client\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961199 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10c016d6-83ef-40e3-81f3-fff5008a34d8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961227 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-tls\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961242 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8n6\" (UniqueName: \"kubernetes.io/projected/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-kube-api-access-mk8n6\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961273 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhtv\" (UniqueName: \"kubernetes.io/projected/a9180272-479b-49b4-a59d-cf76b537331c-kube-api-access-bhhtv\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961288 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94da16c8-dcc7-4cd7-945f-0d6ab6220956-proxy-tls\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961303 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a642adbe-beb8-43c1-aedc-d0bc9c35f049-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961318 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f007d16-9224-42da-a0cd-86099e2846c0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961333 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a642adbe-beb8-43c1-aedc-d0bc9c35f049-config\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961351 
5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:04 crc kubenswrapper[5023]: I0219 08:03:04.961378 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-trusted-ca\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:04 crc kubenswrapper[5023]: E0219 08:03:04.963118 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.463103663 +0000 UTC m=+143.120222611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.011073 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.062960 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.063138 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.56309622 +0000 UTC m=+143.220215178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064391 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4088dce2-3801-4d93-be23-fd29006fd89c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064458 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x8zm7\" (UniqueName: \"kubernetes.io/projected/29ffae09-b2f3-4313-a3a3-86eebe4f2794-kube-api-access-x8zm7\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064511 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064540 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10c016d6-83ef-40e3-81f3-fff5008a34d8-trusted-ca\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064572 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064597 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94da16c8-dcc7-4cd7-945f-0d6ab6220956-images\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 
19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064640 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90253cba-9740-4814-b299-03914a8402e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064675 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ljqx\" (UniqueName: \"kubernetes.io/projected/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-kube-api-access-4ljqx\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064735 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-certificates\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064760 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-config\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064784 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8l9f\" (UniqueName: 
\"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-kube-api-access-k8l9f\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064804 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6d74\" (UniqueName: \"kubernetes.io/projected/dda08cd9-0a13-4887-b853-7677fad599f8-kube-api-access-g6d74\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064830 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvbvq\" (UniqueName: \"kubernetes.io/projected/53d3533a-66eb-471a-84b0-90d7319fe13e-kube-api-access-bvbvq\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064854 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-ca\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064875 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btjkp\" (UniqueName: \"kubernetes.io/projected/2b400757-85ec-48a0-a962-1388812039fd-kube-api-access-btjkp\") pod \"migrator-59844c95c7-4lwsv\" (UID: \"2b400757-85ec-48a0-a962-1388812039fd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064898 
5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff8dff93-95cf-43ff-9206-e5a33e5d552c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064920 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-audit-policies\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064941 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9180272-479b-49b4-a59d-cf76b537331c-serving-cert\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064963 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-config\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.064987 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f007d16-9224-42da-a0cd-86099e2846c0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065015 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjw75\" (UniqueName: \"kubernetes.io/projected/60e8cb05-b158-4ea1-938b-0b0b55e254bb-kube-api-access-cjw75\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065037 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g48p\" (UniqueName: \"kubernetes.io/projected/f725450a-8f6d-4e4c-8526-a42157f1004b-kube-api-access-8g48p\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065057 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6lf\" (UniqueName: \"kubernetes.io/projected/b5facb92-ff56-4794-89a8-7aa3278d46a4-kube-api-access-wp6lf\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065079 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4088dce2-3801-4d93-be23-fd29006fd89c-config\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065103 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065128 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-bound-sa-token\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065149 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065170 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ldp\" (UniqueName: \"kubernetes.io/projected/94da16c8-dcc7-4cd7-945f-0d6ab6220956-kube-api-access-82ldp\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065201 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60e8cb05-b158-4ea1-938b-0b0b55e254bb-config-volume\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " 
pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065224 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-stats-auth\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065245 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-config\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065271 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065296 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065320 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065346 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmvwq\" (UniqueName: \"kubernetes.io/projected/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-kube-api-access-fmvwq\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065376 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n6d\" (UniqueName: \"kubernetes.io/projected/ff8dff93-95cf-43ff-9206-e5a33e5d552c-kube-api-access-k5n6d\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065403 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-mountpoint-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065426 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgl8\" (UniqueName: \"kubernetes.io/projected/3e96a31b-2ea2-4c88-9454-e44ba2a31f09-kube-api-access-fcgl8\") pod \"multus-admission-controller-857f4d67dd-wcq7s\" (UID: \"3e96a31b-2ea2-4c88-9454-e44ba2a31f09\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065455 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-csi-data-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065523 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065553 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4088dce2-3801-4d93-be23-fd29006fd89c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.065734 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-default-certificate\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.066685 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f007d16-9224-42da-a0cd-86099e2846c0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.067771 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-ca\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068114 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-registration-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068165 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3df7d4af-b1dc-4065-8694-be7eeb1956e4-proxy-tls\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068193 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5facb92-ff56-4794-89a8-7aa3278d46a4-config-volume\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 
08:03:05.068225 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f725450a-8f6d-4e4c-8526-a42157f1004b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068254 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9pd\" (UniqueName: \"kubernetes.io/projected/7e47734c-23ac-4520-a65f-77be4ca47be8-kube-api-access-xc9pd\") pod \"ingress-canary-d85fh\" (UID: \"7e47734c-23ac-4520-a65f-77be4ca47be8\") " pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068294 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zctsn\" (UniqueName: \"kubernetes.io/projected/4de168c8-11e8-4d1a-b20d-5753b288f5d6-kube-api-access-zctsn\") pod \"package-server-manager-789f6589d5-bs4qh\" (UID: \"4de168c8-11e8-4d1a-b20d-5753b288f5d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068317 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-service-ca-bundle\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068375 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e47734c-23ac-4520-a65f-77be4ca47be8-cert\") pod \"ingress-canary-d85fh\" (UID: 
\"7e47734c-23ac-4520-a65f-77be4ca47be8\") " pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068406 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/53d3533a-66eb-471a-84b0-90d7319fe13e-certs\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068433 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dda08cd9-0a13-4887-b853-7677fad599f8-webhook-cert\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068458 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/acd541b4-fb89-444f-98c5-99a575b8b605-signing-cabundle\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068464 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-audit-policies\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068481 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-plugins-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068500 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068525 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90253cba-9740-4814-b299-03914a8402e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068551 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs6zp\" (UniqueName: \"kubernetes.io/projected/686daed9-9edb-4929-b686-ed1611d57ca3-kube-api-access-hs6zp\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpsw6\" (UID: \"686daed9-9edb-4929-b686-ed1611d57ca3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.068784 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.069337 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-certificates\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.069584 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.070166 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94da16c8-dcc7-4cd7-945f-0d6ab6220956-images\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.070232 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-config\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.070323 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.071151 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-config\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.072145 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10c016d6-83ef-40e3-81f3-fff5008a34d8-trusted-ca\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.072466 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.572446037 +0000 UTC m=+143.229564995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.074696 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834629bf-75a3-4241-b3ce-2aec76e34a3b-serving-cert\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.076190 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-service-ca-bundle\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.076441 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rzz4\" (UniqueName: \"kubernetes.io/projected/80a649ee-bc87-4ba9-9b01-2760d76d78cd-kube-api-access-6rzz4\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.076554 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/90253cba-9740-4814-b299-03914a8402e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.091389 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4088dce2-3801-4d93-be23-fd29006fd89c-config\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.094776 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.095654 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9180272-479b-49b4-a59d-cf76b537331c-serving-cert\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.095777 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.095821 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5facb92-ff56-4794-89a8-7aa3278d46a4-secret-volume\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.095904 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.095941 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w97wh\" (UniqueName: \"kubernetes.io/projected/ac2444b2-3e6c-4704-b065-abf105add63c-kube-api-access-w97wh\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.095977 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-client-ca\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.096093 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfb7q\" (UniqueName: 
\"kubernetes.io/projected/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-kube-api-access-lfb7q\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.096123 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6h9\" (UniqueName: \"kubernetes.io/projected/acd541b4-fb89-444f-98c5-99a575b8b605-kube-api-access-xd6h9\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.096156 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f725450a-8f6d-4e4c-8526-a42157f1004b-srv-cert\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.096947 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.097281 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 
08:03:05.097435 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.097526 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-config\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.097678 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac2444b2-3e6c-4704-b065-abf105add63c-audit-dir\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.097726 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.097734 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac2444b2-3e6c-4704-b065-abf105add63c-audit-dir\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.097935 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4de168c8-11e8-4d1a-b20d-5753b288f5d6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bs4qh\" (UID: \"4de168c8-11e8-4d1a-b20d-5753b288f5d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.098012 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ffae09-b2f3-4313-a3a3-86eebe4f2794-config\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.098121 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9180272-479b-49b4-a59d-cf76b537331c-service-ca-bundle\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.098192 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834629bf-75a3-4241-b3ce-2aec76e34a3b-serving-cert\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.098204 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.098642 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.099327 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.100841 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff8dff93-95cf-43ff-9206-e5a33e5d552c-srv-cert\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.100913 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a642adbe-beb8-43c1-aedc-d0bc9c35f049-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.100978 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101016 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/686daed9-9edb-4929-b686-ed1611d57ca3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpsw6\" (UID: \"686daed9-9edb-4929-b686-ed1611d57ca3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101077 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101130 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnrbt\" (UniqueName: \"kubernetes.io/projected/10c016d6-83ef-40e3-81f3-fff5008a34d8-kube-api-access-rnrbt\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101154 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dda08cd9-0a13-4887-b853-7677fad599f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101179 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/60e8cb05-b158-4ea1-938b-0b0b55e254bb-metrics-tls\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101204 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101228 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a649ee-bc87-4ba9-9b01-2760d76d78cd-serving-cert\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101262 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6lfr\" (UniqueName: \"kubernetes.io/projected/9f007d16-9224-42da-a0cd-86099e2846c0-kube-api-access-k6lfr\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" 
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101291 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6z4\" (UniqueName: \"kubernetes.io/projected/834629bf-75a3-4241-b3ce-2aec76e34a3b-kube-api-access-jg6z4\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101315 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94da16c8-dcc7-4cd7-945f-0d6ab6220956-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101340 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/acd541b4-fb89-444f-98c5-99a575b8b605-signing-key\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101367 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/53d3533a-66eb-471a-84b0-90d7319fe13e-node-bootstrap-token\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101389 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dda08cd9-0a13-4887-b853-7677fad599f8-tmpfs\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101419 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10c016d6-83ef-40e3-81f3-fff5008a34d8-metrics-tls\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101437 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f007d16-9224-42da-a0cd-86099e2846c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101478 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101500 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wht\" (UniqueName: \"kubernetes.io/projected/3df7d4af-b1dc-4065-8694-be7eeb1956e4-kube-api-access-r8wht\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101524 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10c016d6-83ef-40e3-81f3-fff5008a34d8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101544 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101564 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-service-ca\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101584 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-client\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101608 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-metrics-certs\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101648 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e96a31b-2ea2-4c88-9454-e44ba2a31f09-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wcq7s\" (UID: \"3e96a31b-2ea2-4c88-9454-e44ba2a31f09\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101678 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-tls\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101721 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk8n6\" (UniqueName: \"kubernetes.io/projected/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-kube-api-access-mk8n6\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101759 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhtv\" (UniqueName: \"kubernetes.io/projected/a9180272-479b-49b4-a59d-cf76b537331c-kube-api-access-bhhtv\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101780 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94da16c8-dcc7-4cd7-945f-0d6ab6220956-proxy-tls\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101802 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-socket-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101824 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3df7d4af-b1dc-4065-8694-be7eeb1956e4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101853 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a642adbe-beb8-43c1-aedc-d0bc9c35f049-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101873 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f007d16-9224-42da-a0cd-86099e2846c0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101890 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ffae09-b2f3-4313-a3a3-86eebe4f2794-serving-cert\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101915 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a642adbe-beb8-43c1-aedc-d0bc9c35f049-config\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101935 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.101959 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-trusted-ca\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.102026 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4de168c8-11e8-4d1a-b20d-5753b288f5d6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bs4qh\" (UID: \"4de168c8-11e8-4d1a-b20d-5753b288f5d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.102784 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.103171 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-client-ca\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.103250 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dda08cd9-0a13-4887-b853-7677fad599f8-webhook-cert\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.104059 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-trusted-ca\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.105950 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.106228 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4088dce2-3801-4d93-be23-fd29006fd89c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.107832 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.108265 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a642adbe-beb8-43c1-aedc-d0bc9c35f049-config\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.108756 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dda08cd9-0a13-4887-b853-7677fad599f8-tmpfs\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.109897 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.110519 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94da16c8-dcc7-4cd7-945f-0d6ab6220956-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.110745 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-service-ca\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.111092 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/80a649ee-bc87-4ba9-9b01-2760d76d78cd-etcd-client\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.112264 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94da16c8-dcc7-4cd7-945f-0d6ab6220956-proxy-tls\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.115518 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.116038 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a642adbe-beb8-43c1-aedc-d0bc9c35f049-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.116356 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a649ee-bc87-4ba9-9b01-2760d76d78cd-serving-cert\") pod \"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.117874 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.119553 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.121431 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-tls\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.122652 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f007d16-9224-42da-a0cd-86099e2846c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.123917 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10c016d6-83ef-40e3-81f3-fff5008a34d8-metrics-tls\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.126429 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.128701 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dda08cd9-0a13-4887-b853-7677fad599f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.129132 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4088dce2-3801-4d93-be23-fd29006fd89c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fmnng\" (UID: \"4088dce2-3801-4d93-be23-fd29006fd89c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.132712 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-bound-sa-token\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.141073 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ldp\" (UniqueName: \"kubernetes.io/projected/94da16c8-dcc7-4cd7-945f-0d6ab6220956-kube-api-access-82ldp\") pod \"machine-config-operator-74547568cd-4r8mr\" (UID: \"94da16c8-dcc7-4cd7-945f-0d6ab6220956\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.153940 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8l9f\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-kube-api-access-k8l9f\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.174752 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6d74\" (UniqueName: \"kubernetes.io/projected/dda08cd9-0a13-4887-b853-7677fad599f8-kube-api-access-g6d74\") pod \"packageserver-d55dfcdfc-dj68g\" (UID: \"dda08cd9-0a13-4887-b853-7677fad599f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.181774 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8qx2d"]
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.198955 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xsnwk"]
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.204095 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.204360 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff8dff93-95cf-43ff-9206-e5a33e5d552c-srv-cert\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.204385 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/686daed9-9edb-4929-b686-ed1611d57ca3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpsw6\" (UID: \"686daed9-9edb-4929-b686-ed1611d57ca3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.204427 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/60e8cb05-b158-4ea1-938b-0b0b55e254bb-metrics-tls\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm"
Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.204811 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.70476653 +0000 UTC m=+143.361885478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205333 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/acd541b4-fb89-444f-98c5-99a575b8b605-signing-key\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205360 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/53d3533a-66eb-471a-84b0-90d7319fe13e-node-bootstrap-token\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205385 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wht\" (UniqueName: \"kubernetes.io/projected/3df7d4af-b1dc-4065-8694-be7eeb1956e4-kube-api-access-r8wht\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205409 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-metrics-certs\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205426 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e96a31b-2ea2-4c88-9454-e44ba2a31f09-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wcq7s\" (UID: \"3e96a31b-2ea2-4c88-9454-e44ba2a31f09\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205468 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-socket-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205485 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3df7d4af-b1dc-4065-8694-be7eeb1956e4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205515 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ffae09-b2f3-4313-a3a3-86eebe4f2794-serving-cert\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205533 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8zm7\" (UniqueName: \"kubernetes.io/projected/29ffae09-b2f3-4313-a3a3-86eebe4f2794-kube-api-access-x8zm7\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205557 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90253cba-9740-4814-b299-03914a8402e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205574 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ljqx\" (UniqueName: \"kubernetes.io/projected/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-kube-api-access-4ljqx\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205591 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvbvq\" (UniqueName: \"kubernetes.io/projected/53d3533a-66eb-471a-84b0-90d7319fe13e-kube-api-access-bvbvq\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.205612 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff8dff93-95cf-43ff-9206-e5a33e5d552c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206150 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjw75\" (UniqueName: \"kubernetes.io/projected/60e8cb05-b158-4ea1-938b-0b0b55e254bb-kube-api-access-cjw75\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206169 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g48p\" (UniqueName: \"kubernetes.io/projected/f725450a-8f6d-4e4c-8526-a42157f1004b-kube-api-access-8g48p\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206191 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp6lf\" (UniqueName: \"kubernetes.io/projected/b5facb92-ff56-4794-89a8-7aa3278d46a4-kube-api-access-wp6lf\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206235 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60e8cb05-b158-4ea1-938b-0b0b55e254bb-config-volume\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206251 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-stats-auth\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206270 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmvwq\" (UniqueName: \"kubernetes.io/projected/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-kube-api-access-fmvwq\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206287 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5n6d\" (UniqueName: \"kubernetes.io/projected/ff8dff93-95cf-43ff-9206-e5a33e5d552c-kube-api-access-k5n6d\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206303 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-mountpoint-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206320 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcgl8\" (UniqueName: \"kubernetes.io/projected/3e96a31b-2ea2-4c88-9454-e44ba2a31f09-kube-api-access-fcgl8\") pod \"multus-admission-controller-857f4d67dd-wcq7s\" (UID: \"3e96a31b-2ea2-4c88-9454-e44ba2a31f09\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s"
Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206337 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName:
\"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-csi-data-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206361 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206378 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-default-certificate\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206394 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-registration-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206408 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3df7d4af-b1dc-4065-8694-be7eeb1956e4-proxy-tls\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206426 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5facb92-ff56-4794-89a8-7aa3278d46a4-config-volume\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206444 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f725450a-8f6d-4e4c-8526-a42157f1004b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206462 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc9pd\" (UniqueName: \"kubernetes.io/projected/7e47734c-23ac-4520-a65f-77be4ca47be8-kube-api-access-xc9pd\") pod \"ingress-canary-d85fh\" (UID: \"7e47734c-23ac-4520-a65f-77be4ca47be8\") " pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206483 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-service-ca-bundle\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206502 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e47734c-23ac-4520-a65f-77be4ca47be8-cert\") pod \"ingress-canary-d85fh\" (UID: \"7e47734c-23ac-4520-a65f-77be4ca47be8\") " pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206519 
5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/53d3533a-66eb-471a-84b0-90d7319fe13e-certs\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206535 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/acd541b4-fb89-444f-98c5-99a575b8b605-signing-cabundle\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206549 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-plugins-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.206566 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.207556 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/686daed9-9edb-4929-b686-ed1611d57ca3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpsw6\" (UID: \"686daed9-9edb-4929-b686-ed1611d57ca3\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.208025 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90253cba-9740-4814-b299-03914a8402e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.208116 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs6zp\" (UniqueName: \"kubernetes.io/projected/686daed9-9edb-4929-b686-ed1611d57ca3-kube-api-access-hs6zp\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpsw6\" (UID: \"686daed9-9edb-4929-b686-ed1611d57ca3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.208193 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90253cba-9740-4814-b299-03914a8402e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.208399 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.208605 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5facb92-ff56-4794-89a8-7aa3278d46a4-secret-volume\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.208763 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/60e8cb05-b158-4ea1-938b-0b0b55e254bb-metrics-tls\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.208774 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfb7q\" (UniqueName: \"kubernetes.io/projected/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-kube-api-access-lfb7q\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.209293 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff8dff93-95cf-43ff-9206-e5a33e5d552c-srv-cert\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.209476 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6h9\" (UniqueName: \"kubernetes.io/projected/acd541b4-fb89-444f-98c5-99a575b8b605-kube-api-access-xd6h9\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.209483 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-registration-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.209499 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f725450a-8f6d-4e4c-8526-a42157f1004b-srv-cert\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.209532 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ffae09-b2f3-4313-a3a3-86eebe4f2794-config\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.210052 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90253cba-9740-4814-b299-03914a8402e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.210064 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-stats-auth\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 
08:03:05.210677 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.211689 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/acd541b4-fb89-444f-98c5-99a575b8b605-signing-cabundle\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.213989 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3df7d4af-b1dc-4065-8694-be7eeb1956e4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.214279 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-service-ca-bundle\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.214520 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.714496508 +0000 UTC m=+143.371615526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.214639 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-csi-data-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.214732 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-mountpoint-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.215925 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-socket-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.216091 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-plugins-dir\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc 
kubenswrapper[5023]: I0219 08:03:05.216417 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60e8cb05-b158-4ea1-938b-0b0b55e254bb-config-volume\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.216602 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5facb92-ff56-4794-89a8-7aa3278d46a4-config-volume\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.219601 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e47734c-23ac-4520-a65f-77be4ca47be8-cert\") pod \"ingress-canary-d85fh\" (UID: \"7e47734c-23ac-4520-a65f-77be4ca47be8\") " pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.220086 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/53d3533a-66eb-471a-84b0-90d7319fe13e-node-bootstrap-token\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.220411 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f725450a-8f6d-4e4c-8526-a42157f1004b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 
08:03:05.220907 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.221212 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5facb92-ff56-4794-89a8-7aa3278d46a4-secret-volume\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.221996 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90253cba-9740-4814-b299-03914a8402e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.222007 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff8dff93-95cf-43ff-9206-e5a33e5d552c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.226332 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btjkp\" (UniqueName: \"kubernetes.io/projected/2b400757-85ec-48a0-a962-1388812039fd-kube-api-access-btjkp\") pod \"migrator-59844c95c7-4lwsv\" (UID: 
\"2b400757-85ec-48a0-a962-1388812039fd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.227259 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ffae09-b2f3-4313-a3a3-86eebe4f2794-config\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.230852 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3df7d4af-b1dc-4065-8694-be7eeb1956e4-proxy-tls\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.232068 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ffae09-b2f3-4313-a3a3-86eebe4f2794-serving-cert\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.232390 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/acd541b4-fb89-444f-98c5-99a575b8b605-signing-key\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.232413 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/53d3533a-66eb-471a-84b0-90d7319fe13e-certs\") pod \"machine-config-server-fs6pk\" (UID: 
\"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.232503 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e96a31b-2ea2-4c88-9454-e44ba2a31f09-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wcq7s\" (UID: \"3e96a31b-2ea2-4c88-9454-e44ba2a31f09\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.236182 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f725450a-8f6d-4e4c-8526-a42157f1004b-srv-cert\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.241237 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-default-certificate\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.257051 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-metrics-certs\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.265198 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rzz4\" (UniqueName: \"kubernetes.io/projected/80a649ee-bc87-4ba9-9b01-2760d76d78cd-kube-api-access-6rzz4\") pod 
\"etcd-operator-b45778765-bsqp5\" (UID: \"80a649ee-bc87-4ba9-9b01-2760d76d78cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.265861 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctsn\" (UniqueName: \"kubernetes.io/projected/4de168c8-11e8-4d1a-b20d-5753b288f5d6-kube-api-access-zctsn\") pod \"package-server-manager-789f6589d5-bs4qh\" (UID: \"4de168c8-11e8-4d1a-b20d-5753b288f5d6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.271347 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrmbc"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.279389 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w97wh\" (UniqueName: \"kubernetes.io/projected/ac2444b2-3e6c-4704-b065-abf105add63c-kube-api-access-w97wh\") pod \"oauth-openshift-558db77b4-xn8fp\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.284578 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.298613 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.304306 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.312353 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.312887 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.812858621 +0000 UTC m=+143.469977569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.312981 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.313487 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.813478318 +0000 UTC m=+143.470597266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.316011 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a642adbe-beb8-43c1-aedc-d0bc9c35f049-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fxtnp\" (UID: \"a642adbe-beb8-43c1-aedc-d0bc9c35f049\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.321960 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrbt\" (UniqueName: \"kubernetes.io/projected/10c016d6-83ef-40e3-81f3-fff5008a34d8-kube-api-access-rnrbt\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.332477 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.334048 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t88r2"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.334782 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f007d16-9224-42da-a0cd-86099e2846c0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.350293 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.354773 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.357665 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6z4\" (UniqueName: \"kubernetes.io/projected/834629bf-75a3-4241-b3ce-2aec76e34a3b-kube-api-access-jg6z4\") pod \"route-controller-manager-6576b87f9c-66ljm\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.363219 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.373686 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.375934 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.376200 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.377263 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6lfr\" (UniqueName: \"kubernetes.io/projected/9f007d16-9224-42da-a0cd-86099e2846c0-kube-api-access-k6lfr\") pod \"cluster-image-registry-operator-dc59b4c8b-bb2xv\" (UID: \"9f007d16-9224-42da-a0cd-86099e2846c0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.385856 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.399435 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk8n6\" (UniqueName: \"kubernetes.io/projected/f9faa084-e9ab-434b-a79a-47f6bc2bc55a-kube-api-access-mk8n6\") pod \"openshift-apiserver-operator-796bbdcf4f-lhf48\" (UID: \"f9faa084-e9ab-434b-a79a-47f6bc2bc55a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.414183 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.415863 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:05.915837667 +0000 UTC m=+143.572956615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.420273 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhtv\" (UniqueName: \"kubernetes.io/projected/a9180272-479b-49b4-a59d-cf76b537331c-kube-api-access-bhhtv\") pod \"authentication-operator-69f744f599-fzc2t\" (UID: \"a9180272-479b-49b4-a59d-cf76b537331c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.435857 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10c016d6-83ef-40e3-81f3-fff5008a34d8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p865l\" (UID: \"10c016d6-83ef-40e3-81f3-fff5008a34d8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 
08:03:05.447512 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.448810 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2plfz"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.470382 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.479420 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8zm7\" (UniqueName: \"kubernetes.io/projected/29ffae09-b2f3-4313-a3a3-86eebe4f2794-kube-api-access-x8zm7\") pod \"service-ca-operator-777779d784-5dl48\" (UID: \"29ffae09-b2f3-4313-a3a3-86eebe4f2794\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.495610 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvbvq\" (UniqueName: \"kubernetes.io/projected/53d3533a-66eb-471a-84b0-90d7319fe13e-kube-api-access-bvbvq\") pod \"machine-config-server-fs6pk\" (UID: \"53d3533a-66eb-471a-84b0-90d7319fe13e\") " pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.511764 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs6zp\" (UniqueName: \"kubernetes.io/projected/686daed9-9edb-4929-b686-ed1611d57ca3-kube-api-access-hs6zp\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpsw6\" (UID: \"686daed9-9edb-4929-b686-ed1611d57ca3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.517076 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.517763 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.017747735 +0000 UTC m=+143.674866683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.528335 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t2bq8"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.528681 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.537340 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g48p\" (UniqueName: \"kubernetes.io/projected/f725450a-8f6d-4e4c-8526-a42157f1004b-kube-api-access-8g48p\") pod \"olm-operator-6b444d44fb-xnlzc\" (UID: \"f725450a-8f6d-4e4c-8526-a42157f1004b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.541902 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.549486 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjw75\" (UniqueName: \"kubernetes.io/projected/60e8cb05-b158-4ea1-938b-0b0b55e254bb-kube-api-access-cjw75\") pod \"dns-default-r9spm\" (UID: \"60e8cb05-b158-4ea1-938b-0b0b55e254bb\") " pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.574997 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfb7q\" (UniqueName: \"kubernetes.io/projected/fbde3d31-de3b-4e70-b558-a1e4a4326cfe-kube-api-access-lfb7q\") pod \"router-default-5444994796-xbwvq\" (UID: \"fbde3d31-de3b-4e70-b558-a1e4a4326cfe\") " pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.577196 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.590056 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.602863 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90253cba-9740-4814-b299-03914a8402e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j9bmc\" (UID: \"90253cba-9740-4814-b299-03914a8402e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.615325 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wht\" (UniqueName: \"kubernetes.io/projected/3df7d4af-b1dc-4065-8694-be7eeb1956e4-kube-api-access-r8wht\") pod \"machine-config-controller-84d6567774-hh9kd\" (UID: \"3df7d4af-b1dc-4065-8694-be7eeb1956e4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.626571 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.627240 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.627526 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 08:03:06.127506781 +0000 UTC m=+143.784625729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.631164 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmvwq\" (UniqueName: \"kubernetes.io/projected/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-kube-api-access-fmvwq\") pod \"marketplace-operator-79b997595-xxg6k\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") " pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.670606 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcgl8\" (UniqueName: \"kubernetes.io/projected/3e96a31b-2ea2-4c88-9454-e44ba2a31f09-kube-api-access-fcgl8\") pod \"multus-admission-controller-857f4d67dd-wcq7s\" (UID: \"3e96a31b-2ea2-4c88-9454-e44ba2a31f09\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.682140 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc9pd\" (UniqueName: \"kubernetes.io/projected/7e47734c-23ac-4520-a65f-77be4ca47be8-kube-api-access-xc9pd\") pod \"ingress-canary-d85fh\" (UID: \"7e47734c-23ac-4520-a65f-77be4ca47be8\") " pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.682505 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.693064 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.694134 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5n6d\" (UniqueName: \"kubernetes.io/projected/ff8dff93-95cf-43ff-9206-e5a33e5d552c-kube-api-access-k5n6d\") pod \"catalog-operator-68c6474976-5wplb\" (UID: \"ff8dff93-95cf-43ff-9206-e5a33e5d552c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.694274 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.701009 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.709601 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.727558 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.729363 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.729708 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.730208 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.230190119 +0000 UTC m=+143.887309067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.730837 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.738087 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6h9\" (UniqueName: \"kubernetes.io/projected/acd541b4-fb89-444f-98c5-99a575b8b605-kube-api-access-xd6h9\") pod \"service-ca-9c57cc56f-gg2zr\" (UID: \"acd541b4-fb89-444f-98c5-99a575b8b605\") " pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.753385 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp6lf\" (UniqueName: \"kubernetes.io/projected/b5facb92-ff56-4794-89a8-7aa3278d46a4-kube-api-access-wp6lf\") pod \"collect-profiles-29524800-68n7k\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.753946 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.761862 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.785799 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.791482 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fs6pk" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.798240 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ljqx\" (UniqueName: \"kubernetes.io/projected/c9b24eb3-ad39-4d2f-a98c-da928fd85acf-kube-api-access-4ljqx\") pod \"csi-hostpathplugin-wcnx4\" (UID: \"c9b24eb3-ad39-4d2f-a98c-da928fd85acf\") " pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.798530 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d85fh" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.824226 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.832975 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.833267 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.333227976 +0000 UTC m=+143.990346924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.833331 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.834323 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.334314025 +0000 UTC m=+143.991432973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.851419 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" event={"ID":"9f914dcd-a03e-4b76-beb7-abf3493fbc28","Type":"ContainerStarted","Data":"bb6861e5dc2d48a9e2e69f4059527b54a1ca5c076837682919752c09a472eebe"} Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.883381 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv"] Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.937957 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.938510 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.438443901 +0000 UTC m=+144.095562859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.939269 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:05 crc kubenswrapper[5023]: E0219 08:03:05.943145 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.443110154 +0000 UTC m=+144.100229102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.948163 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" event={"ID":"60b80f60-dcea-468b-9d71-a588df152168","Type":"ContainerStarted","Data":"c1ceac7886382da62211589e3ae5085de4d980271719bdde312919a5de72f73b"} Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.953233 5023 generic.go:334] "Generic (PLEG): container finished" podID="6f7c7288-0b1f-4c0c-9271-0b29ae23a3db" containerID="66dfa1319faf000c69e485b261432c2c87b61154df6fbcc84499d9164a95c893" exitCode=0 Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.953306 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" event={"ID":"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db","Type":"ContainerDied","Data":"66dfa1319faf000c69e485b261432c2c87b61154df6fbcc84499d9164a95c893"} Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.953339 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" event={"ID":"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db","Type":"ContainerStarted","Data":"d76491e92a532e98cdfb08aa268b50239633148121c4e05852688158ca247458"} Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.962126 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" 
event={"ID":"d1772c08-71ce-47f2-be19-6b588dd6e7d5","Type":"ContainerStarted","Data":"a5159fd7803be180e4de8f1769d878ccad847bd851af067b98dd7c97ade98b01"} Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.962175 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" event={"ID":"d1772c08-71ce-47f2-be19-6b588dd6e7d5","Type":"ContainerStarted","Data":"acc2582ccdb4cf4a688459f67f2f6ab2467dd68e804e8d908c665d617f47c924"} Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.962569 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.965534 5023 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mrmbc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 19 08:03:05 crc kubenswrapper[5023]: I0219 08:03:05.965606 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" podUID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.039304 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.044041 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.048888 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.548848323 +0000 UTC m=+144.205967271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.049385 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t2bq8" event={"ID":"1949d038-0d2f-49f5-be36-8ed7a890264c","Type":"ContainerStarted","Data":"208d032e5e5679df1ab30cb2b5b82c52fef419b0c989edb38c5ddc2909a8b3f3"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.052422 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bsqp5"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.063861 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" event={"ID":"46f2d3f1-2dad-40b9-aa13-78c000643917","Type":"ContainerStarted","Data":"0c7c0279b4a59943c6e4a6abca1f686bd8ef0ec2b1282b23fb0cd38982cacf8c"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.063920 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" event={"ID":"46f2d3f1-2dad-40b9-aa13-78c000643917","Type":"ContainerStarted","Data":"12ed5af1c4e637f3d48539b2901aacfdd071bbe499e29c5471cb7ad1176b448d"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.066347 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.088504 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.100552 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" event={"ID":"78a61028-ddc3-4560-8fe7-83deff82f5d7","Type":"ContainerStarted","Data":"508c98bda8ed177e3818cc9494e5407c2d9728273b8f0f1bfc9a9c565679be1a"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.100608 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" event={"ID":"78a61028-ddc3-4560-8fe7-83deff82f5d7","Type":"ContainerStarted","Data":"d02b7a0b286157a67de586f8ddf8ad72f95f4e8c299d9fba637ddaed354b8b32"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.120142 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" event={"ID":"ce979ece-fcf5-4ecb-895c-067f82b9927c","Type":"ContainerStarted","Data":"953c3ec4ecae7324f45339f38c665682bbc14527f15fa7db6e1025266cf624d0"} Feb 19 08:03:06 crc 
kubenswrapper[5023]: I0219 08:03:06.120288 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" event={"ID":"ce979ece-fcf5-4ecb-895c-067f82b9927c","Type":"ContainerStarted","Data":"7976d5310705eb3fd97e8935c2633f2cfb5de1338f91987b2ce855a06395b165"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.120750 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.121952 5023 patch_prober.go:28] interesting pod/console-operator-58897d9998-8qx2d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.122084 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" podUID="ce979ece-fcf5-4ecb-895c-067f82b9927c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.124428 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" event={"ID":"09d951f5-0719-4876-b71c-034c74a7e27d","Type":"ContainerStarted","Data":"2ea9386a32fed8bd028d01d09320730f39c38cfd9ab6272c03d45167d4b1dea0"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.136406 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t88r2" event={"ID":"473d61a9-cdf6-4f1b-9727-ec1f00482f00","Type":"ContainerStarted","Data":"093a7d75fb9fd2a004ae94929b142936d9146d7919cbace7b318fa87e304de72"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.143644 5023 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" event={"ID":"88d11d81-41f6-47db-826a-9a0d3f2d6049","Type":"ContainerStarted","Data":"ce59f9faa24002f7222bcc60b128feb446d5c726fd66b211943db494d11b01e6"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.145929 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.146941 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.147491 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.647465334 +0000 UTC m=+144.304584282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.157831 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" event={"ID":"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7","Type":"ContainerStarted","Data":"38767dea9c0ffae22ce10c3be6a75794944a239dad07bee21742157cfb60e92d"} Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.183112 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.183222 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.247992 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.248138 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 08:03:06.748105748 +0000 UTC m=+144.405224696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.248526 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.250020 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.749996258 +0000 UTC m=+144.407115266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.322528 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xn8fp"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.338312 5023 csr.go:261] certificate signing request csr-cr68b is approved, waiting to be issued Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.346981 5023 csr.go:257] certificate signing request csr-cr68b is issued Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.351497 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.351671 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.851635418 +0000 UTC m=+144.508754366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.352045 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.353246 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.853226451 +0000 UTC m=+144.510345399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: W0219 08:03:06.381911 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda642adbe_beb8_43c1_aedc_d0bc9c35f049.slice/crio-baa60192908639828944a2d0eba28bcc0409c8a28cd8bb2b00e3b90c22549feb WatchSource:0}: Error finding container baa60192908639828944a2d0eba28bcc0409c8a28cd8bb2b00e3b90c22549feb: Status 404 returned error can't find the container with id baa60192908639828944a2d0eba28bcc0409c8a28cd8bb2b00e3b90c22549feb Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.454533 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.456925 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.956881734 +0000 UTC m=+144.614000682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.460245 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.464076 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:06.964047924 +0000 UTC m=+144.621166872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.469197 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wcq7s"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.561314 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.562052 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.062032648 +0000 UTC m=+144.719151596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.588269 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xggn" podStartSLOduration=122.588244952 podStartE2EDuration="2m2.588244952s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:06.560785255 +0000 UTC m=+144.217904193" watchObservedRunningTime="2026-02-19 08:03:06.588244952 +0000 UTC m=+144.245363900" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.626025 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" podStartSLOduration=122.626005031 podStartE2EDuration="2m2.626005031s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:06.624291296 +0000 UTC m=+144.281410244" watchObservedRunningTime="2026-02-19 08:03:06.626005031 +0000 UTC m=+144.283123979" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.645716 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fzc2t"] Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.663408 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.664081 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.164059129 +0000 UTC m=+144.821178077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.670844 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" podStartSLOduration=122.670812998 podStartE2EDuration="2m2.670812998s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:06.670248363 +0000 UTC m=+144.327367311" watchObservedRunningTime="2026-02-19 08:03:06.670812998 +0000 UTC m=+144.327931956" Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.765139 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.765471 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.265432742 +0000 UTC m=+144.922551690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.766103 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.769421 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.269406117 +0000 UTC m=+144.926525065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.875613 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.876095 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.375959188 +0000 UTC m=+145.033078136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:06 crc kubenswrapper[5023]: I0219 08:03:06.977547 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:06 crc kubenswrapper[5023]: E0219 08:03:06.978005 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.477985059 +0000 UTC m=+145.135104007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.079922 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.080238 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.580221455 +0000 UTC m=+145.237340403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.178857 5023 generic.go:334] "Generic (PLEG): container finished" podID="09d951f5-0719-4876-b71c-034c74a7e27d" containerID="7a35dcf7267adcdd3123ba6b7568e9e89ec981bd73e71bb7d2707ccbff6df26d" exitCode=0 Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.178947 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" event={"ID":"09d951f5-0719-4876-b71c-034c74a7e27d","Type":"ContainerDied","Data":"7a35dcf7267adcdd3123ba6b7568e9e89ec981bd73e71bb7d2707ccbff6df26d"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.181926 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.182353 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.682338998 +0000 UTC m=+145.339457936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.188966 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" event={"ID":"ac2444b2-3e6c-4704-b065-abf105add63c","Type":"ContainerStarted","Data":"cf16e95222c0de8dfdfc859f44d16adf519214e5efd05429c5c08dfb96a29c3d"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.191156 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" event={"ID":"3e96a31b-2ea2-4c88-9454-e44ba2a31f09","Type":"ContainerStarted","Data":"bf122433123c4fe353127987b28f786f0bef14aa40d5a6e5278387ebe9d155a2"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.193291 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" event={"ID":"a642adbe-beb8-43c1-aedc-d0bc9c35f049","Type":"ContainerStarted","Data":"baa60192908639828944a2d0eba28bcc0409c8a28cd8bb2b00e3b90c22549feb"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.224334 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" event={"ID":"4088dce2-3801-4d93-be23-fd29006fd89c","Type":"ContainerStarted","Data":"5760a40c14e8d374be6b66be11dc68514e948e2593b44313ea7889338281de7a"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.226986 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p865l"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.237088 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" event={"ID":"88d11d81-41f6-47db-826a-9a0d3f2d6049","Type":"ContainerStarted","Data":"0025fcdabca68f1a8c7fc4308fd005c40de772f7fb89671be7b9506bb6bd96ba"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.250750 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.250794 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" event={"ID":"80a649ee-bc87-4ba9-9b01-2760d76d78cd","Type":"ContainerStarted","Data":"fef05f15f9dfd5dcd47e18dff87d16cf7a9e82609e1d742ed63960b95086d451"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.278447 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.283658 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t88r2" event={"ID":"473d61a9-cdf6-4f1b-9727-ec1f00482f00","Type":"ContainerStarted","Data":"1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.288556 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.290441 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.790405369 +0000 UTC m=+145.447524317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.314427 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fs6pk" event={"ID":"53d3533a-66eb-471a-84b0-90d7319fe13e","Type":"ContainerStarted","Data":"508aabf2f52c9b928363d9ceb61e62da4fd11be630e03d179a58bf7103484c16"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.314501 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fs6pk" event={"ID":"53d3533a-66eb-471a-84b0-90d7319fe13e","Type":"ContainerStarted","Data":"174e305a011b95b43ebca99843b6e5e0193a1a2003e1c8e3ae46d37d91d5c7c8"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.319467 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" event={"ID":"a9180272-479b-49b4-a59d-cf76b537331c","Type":"ContainerStarted","Data":"09e58acc806cb0157852f5748c9c1602a0c3ff450e20e224cc2cc38c0a2e484d"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.342377 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" 
event={"ID":"42a67254-cc33-40f4-ad79-2fcfdac7871e","Type":"ContainerStarted","Data":"3d76a978786427c58fccbdd94e9370dc921dd7d476e2d5b32db6ffaba2f11143"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.343000 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" event={"ID":"42a67254-cc33-40f4-ad79-2fcfdac7871e","Type":"ContainerStarted","Data":"980799891cc299e4c96bb7c9fd7e29adbe17e2dfaae86c865b9ebd792579e85e"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.343087 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.345668 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" event={"ID":"9f914dcd-a03e-4b76-beb7-abf3493fbc28","Type":"ContainerStarted","Data":"d3f10247db480ad3161cf866518e2dd8aba17ab539c4a856674e7e4fa97fba91"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.348024 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 07:58:06 +0000 UTC, rotation deadline is 2026-11-29 02:38:14.847679585 +0000 UTC Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.348100 5023 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6786h35m7.499582319s for next certificate rotation Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.354264 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" event={"ID":"4de168c8-11e8-4d1a-b20d-5753b288f5d6","Type":"ContainerStarted","Data":"6bac5997b189e8eddf8924b45ec1ee2cf72c9bb2cf8ad1ab00d0a395205ebe25"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.361487 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" event={"ID":"78a61028-ddc3-4560-8fe7-83deff82f5d7","Type":"ContainerStarted","Data":"0cd37377c6962642a48d8a85172f6402e568898fdbf869ba2bb44585efff62a3"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.366691 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.394801 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.397656 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" event={"ID":"94da16c8-dcc7-4cd7-945f-0d6ab6220956","Type":"ContainerStarted","Data":"042345a5082c1894397b8342743fce2f7efaa02bb7dbc23ba1d45caa44dda72d"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.397710 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" event={"ID":"94da16c8-dcc7-4cd7-945f-0d6ab6220956","Type":"ContainerStarted","Data":"ac75cc859afe02d10a5286d1ca3c9157afa17c7d28d01076a250b27b0409f43d"} Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.401467 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:07.901447679 +0000 UTC m=+145.558566627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.441742 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" event={"ID":"60b80f60-dcea-468b-9d71-a588df152168","Type":"ContainerStarted","Data":"be1609c6c0e8b21fdea4db79eaf8c16d1c406336066a58ca81aed9a3bcd9430b"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.501553 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.503270 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.003243073 +0000 UTC m=+145.660362021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.523073 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t2bq8" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.523531 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t2bq8" event={"ID":"1949d038-0d2f-49f5-be36-8ed7a890264c","Type":"ContainerStarted","Data":"3f9f06f627823333c46cfe8e9a5a2b11f0231b8a6a8965fc88daf7eef666f276"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.530226 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.530368 5023 patch_prober.go:28] interesting pod/downloads-7954f5f757-t2bq8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.531700 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t2bq8" podUID="1949d038-0d2f-49f5-be36-8ed7a890264c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.535317 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="ce633a6d-590d-49af-9daa-b1e1c2cdfbf7" containerID="af979280fab5556e51a86a7389b3c495ad6be51b2e2b178572ab81227be5a5b4" exitCode=0 Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.535392 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" event={"ID":"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7","Type":"ContainerDied","Data":"af979280fab5556e51a86a7389b3c495ad6be51b2e2b178572ab81227be5a5b4"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.537962 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" event={"ID":"2b400757-85ec-48a0-a962-1388812039fd","Type":"ContainerStarted","Data":"faaf1f508e2cc27331aeb7367a563c26e336e6e85d480d42e199023ea7c826e8"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.537984 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" event={"ID":"2b400757-85ec-48a0-a962-1388812039fd","Type":"ContainerStarted","Data":"7e8dcaacfb9041227743876a9b2bc0edf7584144c5bb0b5aabf39a96cc18a07c"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.551658 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" event={"ID":"dda08cd9-0a13-4887-b853-7677fad599f8","Type":"ContainerStarted","Data":"3c3c14c6f9295a6a51d76dd3e45385438da96cd936786bbddcdf623f7e7bb2e3"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.552918 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d85fh"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.606213 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wcnx4"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.610849 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.616126 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.620505 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.120464636 +0000 UTC m=+145.777583584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.637402 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xbwvq" event={"ID":"fbde3d31-de3b-4e70-b558-a1e4a4326cfe","Type":"ContainerStarted","Data":"b58a3d7603bd2553f1cc139cdef84955a0f41695636f93810ed663cf32c68b8d"} Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.637464 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xbwvq" event={"ID":"fbde3d31-de3b-4e70-b558-a1e4a4326cfe","Type":"ContainerStarted","Data":"a46698b58925db11a65f7d58b44c34636b6a4b6c12e10811926fd0de3712f155"} 
Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.644417 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4z6gh" podStartSLOduration=122.64438911 podStartE2EDuration="2m2.64438911s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:07.641113613 +0000 UTC m=+145.298232561" watchObservedRunningTime="2026-02-19 08:03:07.64438911 +0000 UTC m=+145.301508058" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.667045 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8qx2d" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.674670 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.734504 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.736070 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.236038816 +0000 UTC m=+145.893157764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.759449 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.767072 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:07 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:07 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:07 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.767125 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.852591 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-t88r2" podStartSLOduration=123.852573361 podStartE2EDuration="2m3.852573361s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:07.852450507 +0000 UTC m=+145.509569455" 
watchObservedRunningTime="2026-02-19 08:03:07.852573361 +0000 UTC m=+145.509692309" Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.853249 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:07 crc kubenswrapper[5023]: E0219 08:03:07.853553 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.353538096 +0000 UTC m=+146.010657044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.969323 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.979419 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r9spm"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.982143 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xxg6k"] Feb 19 08:03:07 crc kubenswrapper[5023]: I0219 08:03:07.988099 5023 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd"] Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.008293 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5dl48"] Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.010488 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gg2zr"] Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.013916 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc"] Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.030027 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.030387 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.530368767 +0000 UTC m=+146.187487715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.034810 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fs6pk" podStartSLOduration=6.034785864 podStartE2EDuration="6.034785864s" podCreationTimestamp="2026-02-19 08:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.018534314 +0000 UTC m=+145.675653262" watchObservedRunningTime="2026-02-19 08:03:08.034785864 +0000 UTC m=+145.691904812" Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.053473 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t2bq8" podStartSLOduration=124.053452148 podStartE2EDuration="2m4.053452148s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.053274653 +0000 UTC m=+145.710393601" watchObservedRunningTime="2026-02-19 08:03:08.053452148 +0000 UTC m=+145.710571096" Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.133284 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kdsm7" podStartSLOduration=124.133263061 podStartE2EDuration="2m4.133263061s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.099150758 +0000 UTC m=+145.756269706" watchObservedRunningTime="2026-02-19 08:03:08.133263061 +0000 UTC m=+145.790381999" Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.134929 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.135356 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.635340906 +0000 UTC m=+146.292459854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.144188 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xsnwk" podStartSLOduration=123.144159109 podStartE2EDuration="2m3.144159109s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.133874567 +0000 UTC m=+145.790993505" watchObservedRunningTime="2026-02-19 08:03:08.144159109 +0000 UTC m=+145.801278057" Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.237128 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.237975 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.737951822 +0000 UTC m=+146.395070770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.268464 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xbwvq" podStartSLOduration=123.268443259 podStartE2EDuration="2m3.268443259s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.26545942 +0000 UTC m=+145.922578378" watchObservedRunningTime="2026-02-19 08:03:08.268443259 +0000 UTC m=+145.925562207" Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.342796 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.343789 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.843775143 +0000 UTC m=+146.500894091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.444268 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.444902 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:08.94487485 +0000 UTC m=+146.601993798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.551915 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.552854 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.052838908 +0000 UTC m=+146.709957856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.653765 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.653998 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.153964455 +0000 UTC m=+146.811083403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.654161 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.654554 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.15453572 +0000 UTC m=+146.811654668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.655849 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" event={"ID":"94da16c8-dcc7-4cd7-945f-0d6ab6220956","Type":"ContainerStarted","Data":"56661257ff7be8be37fa2104847ae45619d1a62197fb1e7a3a2dd7a3742e6cd3"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.679840 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" event={"ID":"3e96a31b-2ea2-4c88-9454-e44ba2a31f09","Type":"ContainerStarted","Data":"5e17256c4be575bf900248bc6c6b574ec3863a100e4ffdf3b15c0bc2e882cb1a"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.692773 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" event={"ID":"a642adbe-beb8-43c1-aedc-d0bc9c35f049","Type":"ContainerStarted","Data":"f07e201f634880b51ef3c01696f73592947ac9e2ba6bb82e2fd6188f9404c430"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.726868 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" event={"ID":"b5facb92-ff56-4794-89a8-7aa3278d46a4","Type":"ContainerStarted","Data":"7a1f1590aea1b5d8b198562584908fd2f709740f6bdf23e0ebd395af89c5ac7d"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.726938 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" event={"ID":"b5facb92-ff56-4794-89a8-7aa3278d46a4","Type":"ContainerStarted","Data":"7bfe4d97fc9d8dd8af0d8640fd8f8b7e0a5021b4fbf57fc8dc1a09cfec237c6b"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.739117 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" event={"ID":"4de168c8-11e8-4d1a-b20d-5753b288f5d6","Type":"ContainerStarted","Data":"e7c604d5c61488f3def76a0d8b4a6ae4a0dc2396fc830315ebe5cfa60d473d37"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.739153 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" event={"ID":"4de168c8-11e8-4d1a-b20d-5753b288f5d6","Type":"ContainerStarted","Data":"f8122d1d227f60ca4147f352d1c13cef824ba0b56b5730a0aaf1d38d06d173e4"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.739714 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh"
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.740794 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" event={"ID":"ff8dff93-95cf-43ff-9206-e5a33e5d552c","Type":"ContainerStarted","Data":"98eea1ef0c9fb366e8454ea72491f79034a57b68d1ca58c31704d69be3d0cde0"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.741401 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" event={"ID":"c9b24eb3-ad39-4d2f-a98c-da928fd85acf","Type":"ContainerStarted","Data":"9cde6a81b12f66fdf571c2010402aed233b40440bbb7d2db64d81ecc0895750e"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.742384 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" event={"ID":"dda08cd9-0a13-4887-b853-7677fad599f8","Type":"ContainerStarted","Data":"daec253ae4783ad60ee2f123cb60af2d0a8e0dd08604dfa69ac83cb9a1e5cfc8"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.743133 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g"
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.748503 5023 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dj68g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body=
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.748898 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" podUID="dda08cd9-0a13-4887-b853-7677fad599f8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused"
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.769294 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4r8mr" podStartSLOduration=123.769273387 podStartE2EDuration="2m3.769273387s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.702710985 +0000 UTC m=+146.359829933" watchObservedRunningTime="2026-02-19 08:03:08.769273387 +0000 UTC m=+146.426392335"
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.770311 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.770951 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fxtnp" podStartSLOduration=123.770943091 podStartE2EDuration="2m3.770943091s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.769680538 +0000 UTC m=+146.426799486" watchObservedRunningTime="2026-02-19 08:03:08.770943091 +0000 UTC m=+146.428062039"
Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.771594 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.271564738 +0000 UTC m=+146.928683686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.772783 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 08:03:08 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld
Feb 19 08:03:08 crc kubenswrapper[5023]: [+]process-running ok
Feb 19 08:03:08 crc kubenswrapper[5023]: healthz check failed
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.772828 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.816445 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" podStartSLOduration=123.816419445 podStartE2EDuration="2m3.816419445s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.815031278 +0000 UTC m=+146.472150226" watchObservedRunningTime="2026-02-19 08:03:08.816419445 +0000 UTC m=+146.473538393"
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.857539 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" podStartSLOduration=123.857519263 podStartE2EDuration="2m3.857519263s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.857382119 +0000 UTC m=+146.514501067" watchObservedRunningTime="2026-02-19 08:03:08.857519263 +0000 UTC m=+146.514638211"
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.871638 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.872031 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.372016597 +0000 UTC m=+147.029135545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.884242 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" event={"ID":"42a67254-cc33-40f4-ad79-2fcfdac7871e","Type":"ContainerStarted","Data":"e0ecbd285a625241414b42c2cd7110c341ccafd419395db170e0e93fed52afff"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.921544 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d85fh" event={"ID":"7e47734c-23ac-4520-a65f-77be4ca47be8","Type":"ContainerStarted","Data":"0463321229a81d149ed01f6e20c85ec4959c50f717046bda0f6180e6597b7386"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.921594 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d85fh" event={"ID":"7e47734c-23ac-4520-a65f-77be4ca47be8","Type":"ContainerStarted","Data":"d67a93ff734da144556f54d3685e6f894ee76c316fada9ac581424194f1d02bf"}
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.986352 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 08:03:08 crc kubenswrapper[5023]: E0219 08:03:08.987440 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.487411662 +0000 UTC m=+147.144530610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:08 crc kubenswrapper[5023]: I0219 08:03:08.995562 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" event={"ID":"80a649ee-bc87-4ba9-9b01-2760d76d78cd","Type":"ContainerStarted","Data":"d3ba8254b5b3a7114fe525f00b93576d5497b62530f15f5b894999be4ff27949"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.024043 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r9spm" event={"ID":"60e8cb05-b158-4ea1-938b-0b0b55e254bb","Type":"ContainerStarted","Data":"87632a4fdb0d54b0171ead4325f0febf7da39c9b4279ec074370911fe00ede21"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.064054 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" podStartSLOduration=125.06402256 podStartE2EDuration="2m5.06402256s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.011269373 +0000 UTC m=+146.668388321" watchObservedRunningTime="2026-02-19 08:03:09.06402256 +0000 UTC m=+146.721141508"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.065940 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" podStartSLOduration=125.06593244 podStartE2EDuration="2m5.06593244s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:08.897403719 +0000 UTC m=+146.554522667" watchObservedRunningTime="2026-02-19 08:03:09.06593244 +0000 UTC m=+146.723051388"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.089098 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.090676 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.590663045 +0000 UTC m=+147.247781993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.118257 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-d85fh" podStartSLOduration=7.118235895 podStartE2EDuration="7.118235895s" podCreationTimestamp="2026-02-19 08:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.109069982 +0000 UTC m=+146.766188930" watchObservedRunningTime="2026-02-19 08:03:09.118235895 +0000 UTC m=+146.775354843"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.121783 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" event={"ID":"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db","Type":"ContainerStarted","Data":"cf4e6c921b193cff1f554f909661df21959bd86b753fc96d19bbd114731ce81a"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.121856 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" event={"ID":"6f7c7288-0b1f-4c0c-9271-0b29ae23a3db","Type":"ContainerStarted","Data":"b6749f2896fd5a0f44def421de143344508b0c61f3bfac3c8cd43e1c9f3f229e"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.178353 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" event={"ID":"a9180272-479b-49b4-a59d-cf76b537331c","Type":"ContainerStarted","Data":"000e943f9e6d98c3f9483a63342a046fe699770c6980c317347b287c41ab4f3b"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.192916 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bsqp5" podStartSLOduration=125.192898591 podStartE2EDuration="2m5.192898591s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.178688975 +0000 UTC m=+146.835807923" watchObservedRunningTime="2026-02-19 08:03:09.192898591 +0000 UTC m=+146.850017539"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.194154 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.197892 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.697865613 +0000 UTC m=+147.354984561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.198411 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.199163 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" event={"ID":"f9faa084-e9ab-434b-a79a-47f6bc2bc55a","Type":"ContainerStarted","Data":"7534547307a4b3653107d30d0cec63b7bf9d1d566bdde0c623c8416fd7c961c2"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.199226 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" event={"ID":"f9faa084-e9ab-434b-a79a-47f6bc2bc55a","Type":"ContainerStarted","Data":"c7bbc423ad9e2a4e2b01d59ff90e769408137d2270e907549f370f69b5a0ff91"}
Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.201897 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.701879079 +0000 UTC m=+147.358998027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.222247 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" event={"ID":"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1","Type":"ContainerStarted","Data":"1e6a3b728eac6fb72df4c80e53f9425bfdde6cee3057d3bd0dfd24dea7f773ff"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.222718 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.229492 5023 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xxg6k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.229583 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.242887 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" event={"ID":"4088dce2-3801-4d93-be23-fd29006fd89c","Type":"ContainerStarted","Data":"b89f118bc78b645a8cbb774da4200ee0d58818449875cb1251392f5b8b7947f9"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.266154 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" event={"ID":"90253cba-9740-4814-b299-03914a8402e9","Type":"ContainerStarted","Data":"46a4285f763fed188dd6b62f61bc9cf8dd69d732fe80ce065dc6101126cc7ff5"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.279211 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" event={"ID":"2b400757-85ec-48a0-a962-1388812039fd","Type":"ContainerStarted","Data":"f28f13aee50cad35f62a801bda3bd0bfe99617e4a32c86e6e5f6d0d9b568e79c"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.284596 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" event={"ID":"ac2444b2-3e6c-4704-b065-abf105add63c","Type":"ContainerStarted","Data":"4181226703db5e57deb6948468a4142403311d56e86d23ec5269b627489ed360"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.285654 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.286844 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" event={"ID":"88d11d81-41f6-47db-826a-9a0d3f2d6049","Type":"ContainerStarted","Data":"cc3cb32b95056ba7936afcfabbff68f6936134ee5058ae0df10771125ee8670a"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.288087 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" event={"ID":"f725450a-8f6d-4e4c-8526-a42157f1004b","Type":"ContainerStarted","Data":"e3253d68989ff3dbb5c461ed9e6f925acd11817607b624e491adb31f5b6a3c3c"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.288113 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" event={"ID":"f725450a-8f6d-4e4c-8526-a42157f1004b","Type":"ContainerStarted","Data":"2ff4098571365b25507911d51c15d885ec01900a354ae2fafa189e21aaed5203"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.288668 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.299581 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.306647 5023 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xnlzc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.306902 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" podUID="f725450a-8f6d-4e4c-8526-a42157f1004b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.307669 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.807634288 +0000 UTC m=+147.464753236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.314299 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" event={"ID":"acd541b4-fb89-444f-98c5-99a575b8b605","Type":"ContainerStarted","Data":"c71dd64b8e6e3f62ed6b4d4c22bfae3f14aeae42f6479a87f0c1b460dda78eb4"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.314456 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" event={"ID":"acd541b4-fb89-444f-98c5-99a575b8b605","Type":"ContainerStarted","Data":"2912f90438bbc1e0886efb5ff344a1fc697ae2047f9854cb4bc33641a168a914"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.333362 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" event={"ID":"10c016d6-83ef-40e3-81f3-fff5008a34d8","Type":"ContainerStarted","Data":"1f3ef1c2cbee097ea633531541043c36d038da2996059efc30c03b59247c30f5"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.333643 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" event={"ID":"10c016d6-83ef-40e3-81f3-fff5008a34d8","Type":"ContainerStarted","Data":"4cf3b42da1c4548829de0fc445d38ed20b7e7491df3ce0dc7662b58adcd44378"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.348091 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" event={"ID":"686daed9-9edb-4929-b686-ed1611d57ca3","Type":"ContainerStarted","Data":"37b4b181ad9df847ab7a04c9636ceeb394a9e5076c004beab5f862928ae7cf37"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.348379 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" event={"ID":"686daed9-9edb-4929-b686-ed1611d57ca3","Type":"ContainerStarted","Data":"a8f4cb665f1144c51befc3b2085a87d39306870fca7f33b133b7d3bf283d3d76"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.357194 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" event={"ID":"834629bf-75a3-4241-b3ce-2aec76e34a3b","Type":"ContainerStarted","Data":"73b3201a7dd66b02addaa85208ef15ebbf8e6afcf3f823da8b6cd7cf963b044d"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.357266 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" event={"ID":"834629bf-75a3-4241-b3ce-2aec76e34a3b","Type":"ContainerStarted","Data":"6f68af1e62bdfd912dc105b12bd514b3a0f5decc447b7126eeec709686651908"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.358176 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.362565 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" event={"ID":"9f007d16-9224-42da-a0cd-86099e2846c0","Type":"ContainerStarted","Data":"f3d9c5618f1da4d6d6dfc81279f93b89ce3774979bfd102eee02e5df61b6ab2d"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.362609 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" event={"ID":"9f007d16-9224-42da-a0cd-86099e2846c0","Type":"ContainerStarted","Data":"7688006bd4085bfdb1054cd3ec000527932758a58125a6ad380b8b88ce3b00d5"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.380940 5023 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-66ljm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.381012 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" podUID="834629bf-75a3-4241-b3ce-2aec76e34a3b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.390065 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" event={"ID":"ce633a6d-590d-49af-9daa-b1e1c2cdfbf7","Type":"ContainerStarted","Data":"19b80398ebbe425c975cd85213d628282ecc0484896ad614e11466ce7d8a82a6"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.390894 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.404836 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk"
Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.408656 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:09.908633902 +0000 UTC m=+147.565752850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.414798 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" event={"ID":"09d951f5-0719-4876-b71c-034c74a7e27d","Type":"ContainerStarted","Data":"094649059ef4222da0153e436be42dd6843e9dd4f0df28841389ff8296a148cf"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.420316 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fzc2t" podStartSLOduration=124.420296061 podStartE2EDuration="2m4.420296061s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.299206265 +0000 UTC m=+146.956325213" watchObservedRunningTime="2026-02-19 08:03:09.420296061 +0000 UTC m=+147.077415009"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.422424 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" podStartSLOduration=124.422417827 podStartE2EDuration="2m4.422417827s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.421737639 +0000 UTC m=+147.078856587" watchObservedRunningTime="2026-02-19 08:03:09.422417827 +0000 UTC m=+147.079536775"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.439788 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" event={"ID":"3df7d4af-b1dc-4065-8694-be7eeb1956e4","Type":"ContainerStarted","Data":"b86603dc3cac53580939167fd6fc82cc739ecf643f5f164ca68307b16202139c"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.458229 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2plfz" podStartSLOduration=125.458205644 podStartE2EDuration="2m5.458205644s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.457373572 +0000 UTC m=+147.114492520" watchObservedRunningTime="2026-02-19 08:03:09.458205644 +0000 UTC m=+147.115324592"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.466264 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" event={"ID":"29ffae09-b2f3-4313-a3a3-86eebe4f2794","Type":"ContainerStarted","Data":"d62c7df9d91d28871ea8b953aa9ab3b63bb5c534ecf53747789c26ed741d407f"}
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.475795 5023 patch_prober.go:28] interesting pod/downloads-7954f5f757-t2bq8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.475874 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t2bq8" podUID="1949d038-0d2f-49f5-be36-8ed7a890264c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.516428 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.518452 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.018419057 +0000 UTC m=+147.675538175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.545667 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpsw6" podStartSLOduration=124.545606747 podStartE2EDuration="2m4.545606747s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.532606083 +0000 UTC m=+147.189725021" watchObservedRunningTime="2026-02-19 08:03:09.545606747 +0000 UTC m=+147.202725695"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.607735 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" podStartSLOduration=125.607712941 podStartE2EDuration="2m5.607712941s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.578080346 +0000 UTC m=+147.235199294" watchObservedRunningTime="2026-02-19 08:03:09.607712941 +0000 UTC m=+147.264831889"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.621931 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz"
Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.623273 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.624508 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.637283 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.137258173 +0000 UTC m=+147.794377121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.644423 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lwsv" podStartSLOduration=124.644400412 podStartE2EDuration="2m4.644400412s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.609156299 +0000 UTC m=+147.266275237" watchObservedRunningTime="2026-02-19 08:03:09.644400412 +0000 UTC m=+147.301519360" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.671830 5023 
patch_prober.go:28] interesting pod/apiserver-76f77b778f-mrpgz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.671926 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" podUID="6f7c7288-0b1f-4c0c-9271-0b29ae23a3db" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.724804 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.725114 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.225032446 +0000 UTC m=+147.882151394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.725647 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.726200 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.226177117 +0000 UTC m=+147.883296065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.743179 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gg2zr" podStartSLOduration=124.743157326 podStartE2EDuration="2m4.743157326s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.671326965 +0000 UTC m=+147.328445923" watchObservedRunningTime="2026-02-19 08:03:09.743157326 +0000 UTC m=+147.400276274" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.743852 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lhf48" podStartSLOduration=124.743846465 podStartE2EDuration="2m4.743846465s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.743326681 +0000 UTC m=+147.400445629" watchObservedRunningTime="2026-02-19 08:03:09.743846465 +0000 UTC m=+147.400965413" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.759845 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.759915 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.769856 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:09 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:09 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:09 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.770227 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.822216 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fmnng" podStartSLOduration=124.822174008 podStartE2EDuration="2m4.822174008s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.820575606 +0000 UTC m=+147.477694574" watchObservedRunningTime="2026-02-19 08:03:09.822174008 +0000 UTC m=+147.479292966" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.835364 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.835880 5023 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.33585716 +0000 UTC m=+147.992976108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.887696 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bb2xv" podStartSLOduration=124.865523346 podStartE2EDuration="2m4.865523346s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.862835414 +0000 UTC m=+147.519954362" watchObservedRunningTime="2026-02-19 08:03:09.865523346 +0000 UTC m=+147.522642294" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.944182 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:09 crc kubenswrapper[5023]: E0219 08:03:09.944802 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.444776424 +0000 UTC m=+148.101895372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.970810 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" podStartSLOduration=124.970791892 podStartE2EDuration="2m4.970791892s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.911054731 +0000 UTC m=+147.568173679" watchObservedRunningTime="2026-02-19 08:03:09.970791892 +0000 UTC m=+147.627910840" Feb 19 08:03:09 crc kubenswrapper[5023]: I0219 08:03:09.976464 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" podStartSLOduration=124.976433632 podStartE2EDuration="2m4.976433632s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:09.968165733 +0000 UTC m=+147.625284681" watchObservedRunningTime="2026-02-19 08:03:09.976433632 +0000 UTC m=+147.633552580" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.020320 5023 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" podStartSLOduration=125.020298673 podStartE2EDuration="2m5.020298673s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:10.019552993 +0000 UTC m=+147.676671941" watchObservedRunningTime="2026-02-19 08:03:10.020298673 +0000 UTC m=+147.677417621" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.045350 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.045765 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.545745426 +0000 UTC m=+148.202864364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.121141 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" podStartSLOduration=125.121113131 podStartE2EDuration="2m5.121113131s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:10.078676218 +0000 UTC m=+147.735795166" watchObservedRunningTime="2026-02-19 08:03:10.121113131 +0000 UTC m=+147.778232079" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.121671 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" podStartSLOduration=125.121665576 podStartE2EDuration="2m5.121665576s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:10.120899876 +0000 UTC m=+147.778018824" watchObservedRunningTime="2026-02-19 08:03:10.121665576 +0000 UTC m=+147.778784524" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.146877 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: 
\"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.147342 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.647315805 +0000 UTC m=+148.304434753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.158540 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" podStartSLOduration=125.158517402 podStartE2EDuration="2m5.158517402s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:10.156217641 +0000 UTC m=+147.813336589" watchObservedRunningTime="2026-02-19 08:03:10.158517402 +0000 UTC m=+147.815636350" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.209333 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" podStartSLOduration=125.209301716 podStartE2EDuration="2m5.209301716s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
08:03:10.208148165 +0000 UTC m=+147.865267113" watchObservedRunningTime="2026-02-19 08:03:10.209301716 +0000 UTC m=+147.866420664" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.248414 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.248558 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.748536475 +0000 UTC m=+148.405655423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.248875 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.249282 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.749273714 +0000 UTC m=+148.406392662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.262377 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.266979 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" podStartSLOduration=125.266948982 podStartE2EDuration="2m5.266948982s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:10.264081086 +0000 UTC m=+147.921200034" watchObservedRunningTime="2026-02-19 08:03:10.266948982 +0000 UTC m=+147.924067940" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.354819 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.354990 5023 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.854956412 +0000 UTC m=+148.512075360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.355131 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.355548 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.855530847 +0000 UTC m=+148.512649795 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.456564 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.456798 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.956765267 +0000 UTC m=+148.613884215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.456968 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.457514 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:10.957506586 +0000 UTC m=+148.614625534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.482035 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" event={"ID":"3df7d4af-b1dc-4065-8694-be7eeb1956e4","Type":"ContainerStarted","Data":"6949e3aad75e03e97b794540cdc298b0ee29e3b939be0a0ff6fc18496823c91c"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.482091 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hh9kd" event={"ID":"3df7d4af-b1dc-4065-8694-be7eeb1956e4","Type":"ContainerStarted","Data":"8acf8987b1785e40a4ec7db68743f139979a13a5603548f31a60f9863d0b2419"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.496948 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" event={"ID":"ff8dff93-95cf-43ff-9206-e5a33e5d552c","Type":"ContainerStarted","Data":"da618457001d664ec1c1bd2eff8d4c8a0315b40e1842e568710c321b65e23824"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.498420 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.513027 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p865l" 
event={"ID":"10c016d6-83ef-40e3-81f3-fff5008a34d8","Type":"ContainerStarted","Data":"3245eb4dfd73ba83f5d7271d29205ab6ab655e6acc94f31c4d2ddee28f5f0c12"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.523440 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.523590 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" event={"ID":"3e96a31b-2ea2-4c88-9454-e44ba2a31f09","Type":"ContainerStarted","Data":"685482d16020d41e1a02cea0ad5f08bd254d52df63d7981e98bc4fdb633d3d1a"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.536065 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" event={"ID":"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1","Type":"ContainerStarted","Data":"897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.537424 5023 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xxg6k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.537471 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.557009 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" 
event={"ID":"90253cba-9740-4814-b299-03914a8402e9","Type":"ContainerStarted","Data":"fa511f1cbad1f9654520e6f2236c073f85f8cbb8f18befe528ecadb13e5b8afe"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.558338 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.559225 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.059199338 +0000 UTC m=+148.716318286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.595944 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5dl48" event={"ID":"29ffae09-b2f3-4313-a3a3-86eebe4f2794","Type":"ContainerStarted","Data":"4eb9500350cbca2759f860057a9dbfe1a17cbb2c67800a65b54c719c7f12d927"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.640993 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5wplb" podStartSLOduration=125.640971393 
podStartE2EDuration="2m5.640971393s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:10.577136393 +0000 UTC m=+148.234255331" watchObservedRunningTime="2026-02-19 08:03:10.640971393 +0000 UTC m=+148.298090341" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.641541 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r9spm" event={"ID":"60e8cb05-b158-4ea1-938b-0b0b55e254bb","Type":"ContainerStarted","Data":"968294671f024942ab7807a26d7908d9604023e667d3dbfcf37c5212de5a5df4"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.641594 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r9spm" event={"ID":"60e8cb05-b158-4ea1-938b-0b0b55e254bb","Type":"ContainerStarted","Data":"32a379e413e4d89bed2783cbdffa57e95c93988c7b43391a4a570821ffe270b4"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.641685 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.667911 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.670527 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.170511075 +0000 UTC m=+148.827630023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.675814 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" event={"ID":"c9b24eb3-ad39-4d2f-a98c-da928fd85acf","Type":"ContainerStarted","Data":"e02edb37bc2202c08ecc35130c2ff4d9f6214bc7e2fbac767d36e2862441e1c6"} Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.693941 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.758185 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:10 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:10 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:10 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.758438 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.769197 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.769647 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.269609928 +0000 UTC m=+148.926728876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.795995 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xnlzc" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.824969 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.870557 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:10 crc kubenswrapper[5023]: 
E0219 08:03:10.873162 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.373143589 +0000 UTC m=+149.030262527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.973786 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j9bmc" podStartSLOduration=125.973760792 podStartE2EDuration="2m5.973760792s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:10.963181622 +0000 UTC m=+148.620300560" watchObservedRunningTime="2026-02-19 08:03:10.973760792 +0000 UTC m=+148.630879740" Feb 19 08:03:10 crc kubenswrapper[5023]: I0219 08:03:10.976010 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:10 crc kubenswrapper[5023]: E0219 08:03:10.976557 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.476538946 +0000 UTC m=+149.133657894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.050645 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wcq7s" podStartSLOduration=126.050601376 podStartE2EDuration="2m6.050601376s" podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:11.038879646 +0000 UTC m=+148.695998594" watchObservedRunningTime="2026-02-19 08:03:11.050601376 +0000 UTC m=+148.707720314" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.078256 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.078690 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.578671489 +0000 UTC m=+149.235790437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.148605 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dj68g" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.179482 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.180279 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.680244578 +0000 UTC m=+149.337363526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.291294 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.292033 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.792020037 +0000 UTC m=+149.449138985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.385075 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-r9spm" podStartSLOduration=9.38505302 podStartE2EDuration="9.38505302s" podCreationTimestamp="2026-02-19 08:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:11.321555719 +0000 UTC m=+148.978674667" watchObservedRunningTime="2026-02-19 08:03:11.38505302 +0000 UTC m=+149.042171968" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.392768 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.392958 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.395009 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.894975973 +0000 UTC m=+149.552094921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.407474 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.496595 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.496655 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.496688 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.496731 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.497141 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:11.997126317 +0000 UTC m=+149.654245265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.508602 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.514589 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.515641 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.530104 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.556124 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.602437 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.602842 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.102814895 +0000 UTC m=+149.759933853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.704201 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.704974 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.204961639 +0000 UTC m=+149.862080587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.715849 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" event={"ID":"c9b24eb3-ad39-4d2f-a98c-da928fd85acf","Type":"ContainerStarted","Data":"78effd5f324530b3cacac22093a5dd3eff3399b8a27631f80911dbe9db715c05"} Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.758413 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:11 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:11 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:11 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.758482 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.758877 5023 generic.go:334] "Generic (PLEG): container finished" podID="b5facb92-ff56-4794-89a8-7aa3278d46a4" containerID="7a1f1590aea1b5d8b198562584908fd2f709740f6bdf23e0ebd395af89c5ac7d" exitCode=0 Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.758941 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" event={"ID":"b5facb92-ff56-4794-89a8-7aa3278d46a4","Type":"ContainerDied","Data":"7a1f1590aea1b5d8b198562584908fd2f709740f6bdf23e0ebd395af89c5ac7d"} Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.760878 5023 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xxg6k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.760907 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.775100 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dc2n7" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.805095 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.806360 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.807754 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.307724849 +0000 UTC m=+149.964843797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.808523 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.839251 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.339000437 +0000 UTC m=+149.996119385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.892664 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.893330 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.914216 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:11 crc kubenswrapper[5023]: E0219 08:03:11.916059 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.416034416 +0000 UTC m=+150.073153364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:11 crc kubenswrapper[5023]: I0219 08:03:11.931180 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nvtc8" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.020468 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:12 crc kubenswrapper[5023]: E0219 08:03:12.020968 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.520954523 +0000 UTC m=+150.178073461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.028522 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q274g"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.029987 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.049473 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.052356 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q274g"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.122276 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.122952 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlm44\" (UniqueName: \"kubernetes.io/projected/4d82228e-e1cf-4274-8b24-5468d4c46e38-kube-api-access-wlm44\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " 
pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.123049 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-utilities\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.123083 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-catalog-content\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: E0219 08:03:12.123231 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.62320702 +0000 UTC m=+150.280325968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.211565 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hmqg6"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.217802 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.228937 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.229639 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlm44\" (UniqueName: \"kubernetes.io/projected/4d82228e-e1cf-4274-8b24-5468d4c46e38-kube-api-access-wlm44\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.229700 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-utilities\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.229731 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-catalog-content\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.229763 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:12 crc kubenswrapper[5023]: E0219 08:03:12.230044 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.730032318 +0000 UTC m=+150.387151266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.230495 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-catalog-content\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.230548 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-utilities\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.230553 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hmqg6"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.268353 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlm44\" (UniqueName: \"kubernetes.io/projected/4d82228e-e1cf-4274-8b24-5468d4c46e38-kube-api-access-wlm44\") pod \"certified-operators-q274g\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") " pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.330312 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.330924 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znww\" (UniqueName: \"kubernetes.io/projected/1f33f560-79f7-4acd-b439-22e6969ca87c-kube-api-access-2znww\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.330992 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-catalog-content\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.331016 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-utilities\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: E0219 08:03:12.331142 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.831122374 +0000 UTC m=+150.488241322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.357115 5023 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.386237 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.414863 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cntt2"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.415963 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.426861 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cntt2"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.433327 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2znww\" (UniqueName: \"kubernetes.io/projected/1f33f560-79f7-4acd-b439-22e6969ca87c-kube-api-access-2znww\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.433451 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.434013 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-catalog-content\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.434069 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-utilities\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: E0219 08:03:12.437109 5023 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:12.937082249 +0000 UTC m=+150.594201197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.437155 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-catalog-content\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.435245 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-utilities\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.462855 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2znww\" (UniqueName: \"kubernetes.io/projected/1f33f560-79f7-4acd-b439-22e6969ca87c-kube-api-access-2znww\") pod \"community-operators-hmqg6\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") " pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.543136 5023 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.543885 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-catalog-content\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.543919 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzpc\" (UniqueName: \"kubernetes.io/projected/6c8fc31b-73c3-4a18-bf3a-d684464c7625-kube-api-access-lbzpc\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.543991 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-utilities\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: E0219 08:03:12.544168 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 08:03:13.044148223 +0000 UTC m=+150.701267171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.588967 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.628448 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8sp47"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.630207 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.641451 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sp47"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.645673 5023 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T08:03:12.357151293Z","Handler":null,"Name":""} Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.646670 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-utilities\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.646732 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.646754 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-catalog-content\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.646773 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzpc\" (UniqueName: \"kubernetes.io/projected/6c8fc31b-73c3-4a18-bf3a-d684464c7625-kube-api-access-lbzpc\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: E0219 08:03:12.647368 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 08:03:13.147353905 +0000 UTC m=+150.804472853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rndqk" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.647722 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-utilities\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.647782 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-catalog-content\") pod \"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.674597 5023 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.674650 5023 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.686501 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzpc\" (UniqueName: \"kubernetes.io/projected/6c8fc31b-73c3-4a18-bf3a-d684464c7625-kube-api-access-lbzpc\") pod 
\"certified-operators-cntt2\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") " pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.754282 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.756228 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-utilities\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.756380 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzc7n\" (UniqueName: \"kubernetes.io/projected/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-kube-api-access-tzc7n\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.756491 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-catalog-content\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.771221 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:12 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:12 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:12 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.771596 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.773830 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.789792 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q274g"] Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.799415 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3714621ee56ba65f11c9cdc1477db12245a1f5fb5427b588e81a94d9c2929d20"} Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.801459 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"789f5c3088060d013cf78d846f37365afb87de43e2e0c64ca250e70449f30ab8"} Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.801483 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"03232e73ea171b5805e359a92eca8e43991a1f2516f5a53f3eb4219f2490a06b"} Feb 19 08:03:12 crc kubenswrapper[5023]: W0219 08:03:12.808591 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d82228e_e1cf_4274_8b24_5468d4c46e38.slice/crio-6c8cad5b694e55fd419c70856a6f1fd5d100b364e8a4ba93b52b070cd7bd06ef WatchSource:0}: Error finding container 6c8cad5b694e55fd419c70856a6f1fd5d100b364e8a4ba93b52b070cd7bd06ef: Status 404 returned error can't find the container with id 6c8cad5b694e55fd419c70856a6f1fd5d100b364e8a4ba93b52b070cd7bd06ef Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.809238 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9bc44b4f689b99468c18141d91926d9cfa5fdb6dffd0701ec199185a3cd650cb"} Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.809311 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"90b62b7e8777bf9e57c2e5a759d811a594620241314d2802e1a117aa859d591b"} Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.811124 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.827181 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" event={"ID":"c9b24eb3-ad39-4d2f-a98c-da928fd85acf","Type":"ContainerStarted","Data":"01da24322be567d53731e665169a0f0153a961f58ad98263cd83b34598d911f5"} Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.827232 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" event={"ID":"c9b24eb3-ad39-4d2f-a98c-da928fd85acf","Type":"ContainerStarted","Data":"0bb62c9f061a2d1c6991965a423f7d9778a4b32d2f71b8fa79ac1c4e85b1e5aa"} Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.858081 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.858153 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-utilities\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.858191 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzc7n\" (UniqueName: \"kubernetes.io/projected/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-kube-api-access-tzc7n\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 
08:03:12.858216 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-catalog-content\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.860355 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-utilities\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.869883 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-catalog-content\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.878272 5023 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.878355 5023 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.901109 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzc7n\" (UniqueName: \"kubernetes.io/projected/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-kube-api-access-tzc7n\") pod \"community-operators-8sp47\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.924951 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wcnx4" podStartSLOduration=10.924913693 podStartE2EDuration="10.924913693s" podCreationTimestamp="2026-02-19 08:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:12.899088859 +0000 UTC m=+150.556207807" watchObservedRunningTime="2026-02-19 08:03:12.924913693 +0000 UTC m=+150.582032641" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.981199 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rndqk\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:12 crc kubenswrapper[5023]: I0219 08:03:12.990064 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.124783 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.225363 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cntt2"] Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.237935 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hmqg6"] Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.285280 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sp47"] Feb 19 08:03:13 crc kubenswrapper[5023]: W0219 08:03:13.287047 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f33f560_79f7_4acd_b439_22e6969ca87c.slice/crio-f224f4e37d4896aa33a3ef3d5a4d679597433522e4a6f651fabad2e4334819b6 WatchSource:0}: Error finding container f224f4e37d4896aa33a3ef3d5a4d679597433522e4a6f651fabad2e4334819b6: Status 404 returned error can't find the container with id f224f4e37d4896aa33a3ef3d5a4d679597433522e4a6f651fabad2e4334819b6 Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.287367 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.395885 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rndqk"] Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.468757 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp6lf\" (UniqueName: \"kubernetes.io/projected/b5facb92-ff56-4794-89a8-7aa3278d46a4-kube-api-access-wp6lf\") pod \"b5facb92-ff56-4794-89a8-7aa3278d46a4\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.468874 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5facb92-ff56-4794-89a8-7aa3278d46a4-secret-volume\") pod \"b5facb92-ff56-4794-89a8-7aa3278d46a4\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.469034 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5facb92-ff56-4794-89a8-7aa3278d46a4-config-volume\") pod \"b5facb92-ff56-4794-89a8-7aa3278d46a4\" (UID: \"b5facb92-ff56-4794-89a8-7aa3278d46a4\") " Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.469878 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5facb92-ff56-4794-89a8-7aa3278d46a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5facb92-ff56-4794-89a8-7aa3278d46a4" (UID: "b5facb92-ff56-4794-89a8-7aa3278d46a4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.477085 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5facb92-ff56-4794-89a8-7aa3278d46a4-kube-api-access-wp6lf" (OuterVolumeSpecName: "kube-api-access-wp6lf") pod "b5facb92-ff56-4794-89a8-7aa3278d46a4" (UID: "b5facb92-ff56-4794-89a8-7aa3278d46a4"). InnerVolumeSpecName "kube-api-access-wp6lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.482070 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5facb92-ff56-4794-89a8-7aa3278d46a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5facb92-ff56-4794-89a8-7aa3278d46a4" (UID: "b5facb92-ff56-4794-89a8-7aa3278d46a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.491902 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.570145 5023 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5facb92-ff56-4794-89a8-7aa3278d46a4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.570177 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp6lf\" (UniqueName: \"kubernetes.io/projected/b5facb92-ff56-4794-89a8-7aa3278d46a4-kube-api-access-wp6lf\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.570191 5023 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5facb92-ff56-4794-89a8-7aa3278d46a4-secret-volume\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.759272 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:13 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:13 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:13 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.759336 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.832827 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" event={"ID":"a9b9ec2c-86d2-40e9-b7bb-e7af21612798","Type":"ContainerStarted","Data":"93690281830da2190097f46acec109c7978c61a85724e7ae3e9e8af570260431"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.832884 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" event={"ID":"a9b9ec2c-86d2-40e9-b7bb-e7af21612798","Type":"ContainerStarted","Data":"1bcae242847527734a508338c062cd36da787cf5df4496c0f1dfe4c086069b8e"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.832926 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.834851 5023 generic.go:334] "Generic (PLEG): container finished" podID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerID="87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4" exitCode=0 Feb 19 08:03:13 crc 
kubenswrapper[5023]: I0219 08:03:13.834911 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmqg6" event={"ID":"1f33f560-79f7-4acd-b439-22e6969ca87c","Type":"ContainerDied","Data":"87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.834928 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmqg6" event={"ID":"1f33f560-79f7-4acd-b439-22e6969ca87c","Type":"ContainerStarted","Data":"f224f4e37d4896aa33a3ef3d5a4d679597433522e4a6f651fabad2e4334819b6"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.836543 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" event={"ID":"b5facb92-ff56-4794-89a8-7aa3278d46a4","Type":"ContainerDied","Data":"7bfe4d97fc9d8dd8af0d8640fd8f8b7e0a5021b4fbf57fc8dc1a09cfec237c6b"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.836564 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524800-68n7k" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.836570 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bfe4d97fc9d8dd8af0d8640fd8f8b7e0a5021b4fbf57fc8dc1a09cfec237c6b" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.838232 5023 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.838976 5023 generic.go:334] "Generic (PLEG): container finished" podID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerID="986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66" exitCode=0 Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.839048 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cntt2" event={"ID":"6c8fc31b-73c3-4a18-bf3a-d684464c7625","Type":"ContainerDied","Data":"986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.839075 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cntt2" event={"ID":"6c8fc31b-73c3-4a18-bf3a-d684464c7625","Type":"ContainerStarted","Data":"11c7af989449e75ceadd542831eca611c0d2e4021d58132f60e8a0805e888842"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.846231 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"86dde6e09e91f1a87cf8b5e41476f6ca6ec58e9b2bce0a940573bb2ea5619dc8"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.846904 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.848714 5023 generic.go:334] 
"Generic (PLEG): container finished" podID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerID="635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6" exitCode=0 Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.848800 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sp47" event={"ID":"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b","Type":"ContainerDied","Data":"635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.848822 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sp47" event={"ID":"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b","Type":"ContainerStarted","Data":"be49261beb984fe665781bc69d7464ccf757b717672af80c0365e1d94904f2d7"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.851136 5023 generic.go:334] "Generic (PLEG): container finished" podID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerID="ea5d0664c794877cb7931705148bac489d83b578556c33cdb651210aa5cc39d3" exitCode=0 Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.851650 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q274g" event={"ID":"4d82228e-e1cf-4274-8b24-5468d4c46e38","Type":"ContainerDied","Data":"ea5d0664c794877cb7931705148bac489d83b578556c33cdb651210aa5cc39d3"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.851776 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q274g" event={"ID":"4d82228e-e1cf-4274-8b24-5468d4c46e38","Type":"ContainerStarted","Data":"6c8cad5b694e55fd419c70856a6f1fd5d100b364e8a4ba93b52b070cd7bd06ef"} Feb 19 08:03:13 crc kubenswrapper[5023]: I0219 08:03:13.872824 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" podStartSLOduration=128.872796634 podStartE2EDuration="2m8.872796634s" 
podCreationTimestamp="2026-02-19 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:13.866375804 +0000 UTC m=+151.523494762" watchObservedRunningTime="2026-02-19 08:03:13.872796634 +0000 UTC m=+151.529915582" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.045766 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 08:03:14 crc kubenswrapper[5023]: E0219 08:03:14.045988 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5facb92-ff56-4794-89a8-7aa3278d46a4" containerName="collect-profiles" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.046000 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5facb92-ff56-4794-89a8-7aa3278d46a4" containerName="collect-profiles" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.046104 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5facb92-ff56-4794-89a8-7aa3278d46a4" containerName="collect-profiles" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.046497 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.049872 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.052331 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.058438 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.178985 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a976e995-091d-475d-8948-d2b7b375925d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.179103 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a976e995-091d-475d-8948-d2b7b375925d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.206402 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mcd4q"] Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.207462 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.209688 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.226006 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcd4q"] Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.280416 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a976e995-091d-475d-8948-d2b7b375925d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.280536 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a976e995-091d-475d-8948-d2b7b375925d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.280820 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a976e995-091d-475d-8948-d2b7b375925d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.310735 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a976e995-091d-475d-8948-d2b7b375925d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.360358 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.382481 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9tf\" (UniqueName: \"kubernetes.io/projected/3821bfef-83d2-421f-b316-00e277a9341d-kube-api-access-4n9tf\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.382548 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-utilities\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.382650 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-catalog-content\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.484275 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-utilities\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.484422 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-catalog-content\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.484470 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9tf\" (UniqueName: \"kubernetes.io/projected/3821bfef-83d2-421f-b316-00e277a9341d-kube-api-access-4n9tf\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.485933 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-utilities\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.486084 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-catalog-content\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.508907 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9tf\" (UniqueName: \"kubernetes.io/projected/3821bfef-83d2-421f-b316-00e277a9341d-kube-api-access-4n9tf\") pod \"redhat-marketplace-mcd4q\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") " pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.538526 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.612092 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fqrqx"] Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.619950 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.633203 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.635154 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqrqx"] Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.642508 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mrpgz" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.785897 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:14 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:14 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:14 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.786028 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.799243 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-catalog-content\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.799350 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gbt\" (UniqueName: \"kubernetes.io/projected/e6e2a6d5-58be-494c-b034-b5d81da8e46d-kube-api-access-r6gbt\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.799585 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-utilities\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.818874 5023 patch_prober.go:28] interesting pod/downloads-7954f5f757-t2bq8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.818942 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t2bq8" podUID="1949d038-0d2f-49f5-be36-8ed7a890264c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.819180 5023 patch_prober.go:28] interesting pod/downloads-7954f5f757-t2bq8 container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.819216 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t2bq8" podUID="1949d038-0d2f-49f5-be36-8ed7a890264c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.869516 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.869606 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.883988 5023 patch_prober.go:28] interesting pod/console-f9d7485db-t88r2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.884047 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t88r2" podUID="473d61a9-cdf6-4f1b-9727-ec1f00482f00" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.903802 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-catalog-content\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.903895 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6gbt\" (UniqueName: \"kubernetes.io/projected/e6e2a6d5-58be-494c-b034-b5d81da8e46d-kube-api-access-r6gbt\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.903975 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-utilities\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.904344 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-catalog-content\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.907186 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-utilities\") pod \"redhat-marketplace-fqrqx\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.930550 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.935774 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6gbt\" (UniqueName: \"kubernetes.io/projected/e6e2a6d5-58be-494c-b034-b5d81da8e46d-kube-api-access-r6gbt\") pod \"redhat-marketplace-fqrqx\" (UID: 
\"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.947499 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:14 crc kubenswrapper[5023]: I0219 08:03:14.957469 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcd4q"] Feb 19 08:03:14 crc kubenswrapper[5023]: W0219 08:03:14.989158 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3821bfef_83d2_421f_b316_00e277a9341d.slice/crio-518b2ac7b53552ee86dadf63bc8cb692ce1346b979857dd3bc7ccceff189a764 WatchSource:0}: Error finding container 518b2ac7b53552ee86dadf63bc8cb692ce1346b979857dd3bc7ccceff189a764: Status 404 returned error can't find the container with id 518b2ac7b53552ee86dadf63bc8cb692ce1346b979857dd3bc7ccceff189a764 Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.225029 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2cdmv"] Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.226523 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.236335 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.253077 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cdmv"] Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.318443 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-utilities\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.318891 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-catalog-content\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.318975 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrl5\" (UniqueName: \"kubernetes.io/projected/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-kube-api-access-pqrl5\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.421286 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-utilities\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " 
pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.421587 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-catalog-content\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.421809 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqrl5\" (UniqueName: \"kubernetes.io/projected/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-kube-api-access-pqrl5\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.422885 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-utilities\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.423301 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-catalog-content\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.469778 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqrl5\" (UniqueName: \"kubernetes.io/projected/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-kube-api-access-pqrl5\") pod \"redhat-operators-2cdmv\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") " pod="openshift-marketplace/redhat-operators-2cdmv" Feb 
19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.501137 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqrqx"] Feb 19 08:03:15 crc kubenswrapper[5023]: W0219 08:03:15.562560 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e2a6d5_58be_494c_b034_b5d81da8e46d.slice/crio-faf9b91a95c88de746917052d1ca76aab94085ff0cd1ca1d9b633ef939d37c22 WatchSource:0}: Error finding container faf9b91a95c88de746917052d1ca76aab94085ff0cd1ca1d9b633ef939d37c22: Status 404 returned error can't find the container with id faf9b91a95c88de746917052d1ca76aab94085ff0cd1ca1d9b633ef939d37c22 Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.605214 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbcq7"] Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.613451 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.634213 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbcq7"] Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.685957 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.735172 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-catalog-content\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.735463 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-utilities\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.735597 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt54m\" (UniqueName: \"kubernetes.io/projected/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-kube-api-access-tt54m\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.745684 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.755561 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.762132 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 19 08:03:15 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:15 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:15 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.762217 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.837024 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-catalog-content\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.837564 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-catalog-content\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.839352 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-utilities\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.839689 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-utilities\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " 
pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.839793 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt54m\" (UniqueName: \"kubernetes.io/projected/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-kube-api-access-tt54m\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.873982 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt54m\" (UniqueName: \"kubernetes.io/projected/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-kube-api-access-tt54m\") pod \"redhat-operators-cbcq7\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.939751 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a976e995-091d-475d-8948-d2b7b375925d","Type":"ContainerStarted","Data":"8c733d436a0422544f2b47929c73c9fc611c06f97e3423f6e2fbaf6e4b77a651"} Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.940121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a976e995-091d-475d-8948-d2b7b375925d","Type":"ContainerStarted","Data":"37e27223e253653f0bd03d33925328033db662aaaf602689f8cc942fce1ab421"} Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.951987 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqrqx" event={"ID":"e6e2a6d5-58be-494c-b034-b5d81da8e46d","Type":"ContainerStarted","Data":"faf9b91a95c88de746917052d1ca76aab94085ff0cd1ca1d9b633ef939d37c22"} Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.963598 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="3821bfef-83d2-421f-b316-00e277a9341d" containerID="17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8" exitCode=0 Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.963682 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcd4q" event={"ID":"3821bfef-83d2-421f-b316-00e277a9341d","Type":"ContainerDied","Data":"17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8"} Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.963737 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcd4q" event={"ID":"3821bfef-83d2-421f-b316-00e277a9341d","Type":"ContainerStarted","Data":"518b2ac7b53552ee86dadf63bc8cb692ce1346b979857dd3bc7ccceff189a764"} Feb 19 08:03:15 crc kubenswrapper[5023]: I0219 08:03:15.964047 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.964029532 podStartE2EDuration="1.964029532s" podCreationTimestamp="2026-02-19 08:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:15.96129373 +0000 UTC m=+153.618412678" watchObservedRunningTime="2026-02-19 08:03:15.964029532 +0000 UTC m=+153.621148470" Feb 19 08:03:16 crc kubenswrapper[5023]: I0219 08:03:16.035342 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:16 crc kubenswrapper[5023]: I0219 08:03:16.276125 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cdmv"] Feb 19 08:03:16 crc kubenswrapper[5023]: I0219 08:03:16.582302 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbcq7"] Feb 19 08:03:16 crc kubenswrapper[5023]: I0219 08:03:16.761048 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:16 crc kubenswrapper[5023]: [-]has-synced failed: reason withheld Feb 19 08:03:16 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:16 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:16 crc kubenswrapper[5023]: I0219 08:03:16.761111 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.016952 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.017817 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.021672 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.022390 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.023836 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.028231 5023 generic.go:334] "Generic (PLEG): container finished" podID="a976e995-091d-475d-8948-d2b7b375925d" containerID="8c733d436a0422544f2b47929c73c9fc611c06f97e3423f6e2fbaf6e4b77a651" exitCode=0 Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.028345 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a976e995-091d-475d-8948-d2b7b375925d","Type":"ContainerDied","Data":"8c733d436a0422544f2b47929c73c9fc611c06f97e3423f6e2fbaf6e4b77a651"} Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.031187 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerStarted","Data":"9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a"} Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.031220 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerStarted","Data":"6c0ec452e11f69b02edf32b6740b068bb36e6e986fde643265800891af345d0f"} Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.043208 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerID="d63760bac948405178c99e7d65b3e888a4ed3ccfa6e6dda583662bdc54392e77" exitCode=0 Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.043277 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqrqx" event={"ID":"e6e2a6d5-58be-494c-b034-b5d81da8e46d","Type":"ContainerDied","Data":"d63760bac948405178c99e7d65b3e888a4ed3ccfa6e6dda583662bdc54392e77"} Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.048055 5023 generic.go:334] "Generic (PLEG): container finished" podID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerID="a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700" exitCode=0 Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.048087 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cdmv" event={"ID":"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62","Type":"ContainerDied","Data":"a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700"} Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.048105 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cdmv" event={"ID":"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62","Type":"ContainerStarted","Data":"7aa0fba1c5430462f9aca60320924a17dd0f60898a51f88b2cbf1f613b65ef19"} Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.091478 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.091530 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.193344 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.193672 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.193799 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.225191 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.350985 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.757249 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.762250 5023 patch_prober.go:28] interesting pod/router-default-5444994796-xbwvq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 08:03:17 crc kubenswrapper[5023]: [+]has-synced ok Feb 19 08:03:17 crc kubenswrapper[5023]: [+]process-running ok Feb 19 08:03:17 crc kubenswrapper[5023]: healthz check failed Feb 19 08:03:17 crc kubenswrapper[5023]: I0219 08:03:17.762720 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbwvq" podUID="fbde3d31-de3b-4e70-b558-a1e4a4326cfe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.061405 5023 generic.go:334] "Generic (PLEG): container finished" podID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerID="9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a" exitCode=0 Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.061469 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerDied","Data":"9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a"} Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.064030 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1","Type":"ContainerStarted","Data":"aeb9a82a17ed33fc8f8ad6015a7ea4f224f5fdbc76b9313f9220991e1a863fd4"} Feb 19 08:03:18 crc kubenswrapper[5023]: 
I0219 08:03:18.377087 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.434918 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a976e995-091d-475d-8948-d2b7b375925d-kube-api-access\") pod \"a976e995-091d-475d-8948-d2b7b375925d\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.435122 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a976e995-091d-475d-8948-d2b7b375925d-kubelet-dir\") pod \"a976e995-091d-475d-8948-d2b7b375925d\" (UID: \"a976e995-091d-475d-8948-d2b7b375925d\") " Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.435733 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a976e995-091d-475d-8948-d2b7b375925d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a976e995-091d-475d-8948-d2b7b375925d" (UID: "a976e995-091d-475d-8948-d2b7b375925d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.467933 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a976e995-091d-475d-8948-d2b7b375925d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a976e995-091d-475d-8948-d2b7b375925d" (UID: "a976e995-091d-475d-8948-d2b7b375925d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.538004 5023 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a976e995-091d-475d-8948-d2b7b375925d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.538046 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a976e995-091d-475d-8948-d2b7b375925d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.764717 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:18 crc kubenswrapper[5023]: I0219 08:03:18.767393 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xbwvq" Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.098162 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-xsnx6_42a67254-cc33-40f4-ad79-2fcfdac7871e/cluster-samples-operator/0.log" Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.098470 5023 generic.go:334] "Generic (PLEG): container finished" podID="42a67254-cc33-40f4-ad79-2fcfdac7871e" containerID="3d76a978786427c58fccbdd94e9370dc921dd7d476e2d5b32db6ffaba2f11143" exitCode=2 Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.098597 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" event={"ID":"42a67254-cc33-40f4-ad79-2fcfdac7871e","Type":"ContainerDied","Data":"3d76a978786427c58fccbdd94e9370dc921dd7d476e2d5b32db6ffaba2f11143"} Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.099456 5023 scope.go:117] "RemoveContainer" 
containerID="3d76a978786427c58fccbdd94e9370dc921dd7d476e2d5b32db6ffaba2f11143" Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.112429 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a976e995-091d-475d-8948-d2b7b375925d","Type":"ContainerDied","Data":"37e27223e253653f0bd03d33925328033db662aaaf602689f8cc942fce1ab421"} Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.112467 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e27223e253653f0bd03d33925328033db662aaaf602689f8cc942fce1ab421" Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.112490 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.132530 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1","Type":"ContainerStarted","Data":"12c4fbaa4d76f3f99e3560a19346d2791bac9a9f00628643d2ae29bf69dd0899"} Feb 19 08:03:19 crc kubenswrapper[5023]: I0219 08:03:19.161578 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.161553374 podStartE2EDuration="3.161553374s" podCreationTimestamp="2026-02-19 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:19.16140092 +0000 UTC m=+156.818519868" watchObservedRunningTime="2026-02-19 08:03:19.161553374 +0000 UTC m=+156.818672322" Feb 19 08:03:20 crc kubenswrapper[5023]: I0219 08:03:20.199060 5023 generic.go:334] "Generic (PLEG): container finished" podID="1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1" containerID="12c4fbaa4d76f3f99e3560a19346d2791bac9a9f00628643d2ae29bf69dd0899" exitCode=0 Feb 19 
08:03:20 crc kubenswrapper[5023]: I0219 08:03:20.200121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1","Type":"ContainerDied","Data":"12c4fbaa4d76f3f99e3560a19346d2791bac9a9f00628643d2ae29bf69dd0899"} Feb 19 08:03:20 crc kubenswrapper[5023]: I0219 08:03:20.205973 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-xsnx6_42a67254-cc33-40f4-ad79-2fcfdac7871e/cluster-samples-operator/0.log" Feb 19 08:03:20 crc kubenswrapper[5023]: I0219 08:03:20.206068 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xsnx6" event={"ID":"42a67254-cc33-40f4-ad79-2fcfdac7871e","Type":"ContainerStarted","Data":"9f4c2f0496456824faf6685db1e63a268d49d5a2908a46f2452cb789951fc287"} Feb 19 08:03:20 crc kubenswrapper[5023]: I0219 08:03:20.791094 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-r9spm" Feb 19 08:03:21 crc kubenswrapper[5023]: I0219 08:03:21.705366 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:21 crc kubenswrapper[5023]: I0219 08:03:21.813826 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kube-api-access\") pod \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " Feb 19 08:03:21 crc kubenswrapper[5023]: I0219 08:03:21.813928 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kubelet-dir\") pod \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\" (UID: \"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1\") " Feb 19 08:03:21 crc kubenswrapper[5023]: I0219 08:03:21.814283 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1" (UID: "1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:03:21 crc kubenswrapper[5023]: I0219 08:03:21.824404 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1" (UID: "1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:21 crc kubenswrapper[5023]: I0219 08:03:21.915360 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:21 crc kubenswrapper[5023]: I0219 08:03:21.915399 5023 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:22 crc kubenswrapper[5023]: I0219 08:03:22.269000 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1","Type":"ContainerDied","Data":"aeb9a82a17ed33fc8f8ad6015a7ea4f224f5fdbc76b9313f9220991e1a863fd4"} Feb 19 08:03:22 crc kubenswrapper[5023]: I0219 08:03:22.269148 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 08:03:22 crc kubenswrapper[5023]: I0219 08:03:22.269159 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeb9a82a17ed33fc8f8ad6015a7ea4f224f5fdbc76b9313f9220991e1a863fd4" Feb 19 08:03:24 crc kubenswrapper[5023]: I0219 08:03:24.824315 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t2bq8" Feb 19 08:03:25 crc kubenswrapper[5023]: I0219 08:03:25.095771 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:25 crc kubenswrapper[5023]: I0219 08:03:25.099927 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:03:26 crc kubenswrapper[5023]: I0219 08:03:26.596519 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:03:26 crc kubenswrapper[5023]: I0219 08:03:26.615710 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e27029b-2441-4434-bbd8-849e96acc2da-metrics-certs\") pod \"network-metrics-daemon-bdvrm\" (UID: \"9e27029b-2441-4434-bbd8-849e96acc2da\") " pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:03:26 crc kubenswrapper[5023]: I0219 08:03:26.706081 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdvrm" Feb 19 08:03:29 crc kubenswrapper[5023]: I0219 08:03:29.125684 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrmbc"] Feb 19 08:03:29 crc kubenswrapper[5023]: I0219 08:03:29.125949 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" podUID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" containerName="controller-manager" containerID="cri-o://a5159fd7803be180e4de8f1769d878ccad847bd851af067b98dd7c97ade98b01" gracePeriod=30 Feb 19 08:03:29 crc kubenswrapper[5023]: I0219 08:03:29.143248 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"] Feb 19 08:03:29 crc kubenswrapper[5023]: I0219 08:03:29.143579 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" podUID="834629bf-75a3-4241-b3ce-2aec76e34a3b" containerName="route-controller-manager" containerID="cri-o://73b3201a7dd66b02addaa85208ef15ebbf8e6afcf3f823da8b6cd7cf963b044d" gracePeriod=30 Feb 19 08:03:29 crc kubenswrapper[5023]: I0219 08:03:29.353973 5023 generic.go:334] "Generic (PLEG): container finished" podID="834629bf-75a3-4241-b3ce-2aec76e34a3b" containerID="73b3201a7dd66b02addaa85208ef15ebbf8e6afcf3f823da8b6cd7cf963b044d" exitCode=0 Feb 19 08:03:29 crc kubenswrapper[5023]: I0219 08:03:29.354068 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" event={"ID":"834629bf-75a3-4241-b3ce-2aec76e34a3b","Type":"ContainerDied","Data":"73b3201a7dd66b02addaa85208ef15ebbf8e6afcf3f823da8b6cd7cf963b044d"} Feb 19 08:03:30 crc kubenswrapper[5023]: I0219 08:03:30.364614 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" containerID="a5159fd7803be180e4de8f1769d878ccad847bd851af067b98dd7c97ade98b01" exitCode=0 Feb 19 08:03:30 crc kubenswrapper[5023]: I0219 08:03:30.364706 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" event={"ID":"d1772c08-71ce-47f2-be19-6b588dd6e7d5","Type":"ContainerDied","Data":"a5159fd7803be180e4de8f1769d878ccad847bd851af067b98dd7c97ade98b01"} Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.133771 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.249201 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.294694 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj"] Feb 19 08:03:33 crc kubenswrapper[5023]: E0219 08:03:33.294979 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1" containerName="pruner" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.294999 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1" containerName="pruner" Feb 19 08:03:33 crc kubenswrapper[5023]: E0219 08:03:33.295011 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834629bf-75a3-4241-b3ce-2aec76e34a3b" containerName="route-controller-manager" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.295019 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="834629bf-75a3-4241-b3ce-2aec76e34a3b" containerName="route-controller-manager" Feb 19 08:03:33 crc kubenswrapper[5023]: E0219 08:03:33.295032 5023 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a976e995-091d-475d-8948-d2b7b375925d" containerName="pruner" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.295040 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a976e995-091d-475d-8948-d2b7b375925d" containerName="pruner" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.295158 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a976e995-091d-475d-8948-d2b7b375925d" containerName="pruner" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.295170 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="1363fd76-cf8c-4a6f-9aa2-af6fe9e60ec1" containerName="pruner" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.295181 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="834629bf-75a3-4241-b3ce-2aec76e34a3b" containerName="route-controller-manager" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.295692 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.325024 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj"] Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.325356 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-config\") pod \"834629bf-75a3-4241-b3ce-2aec76e34a3b\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.325441 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-client-ca\") pod \"834629bf-75a3-4241-b3ce-2aec76e34a3b\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " Feb 19 08:03:33 crc 
kubenswrapper[5023]: I0219 08:03:33.325518 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg6z4\" (UniqueName: \"kubernetes.io/projected/834629bf-75a3-4241-b3ce-2aec76e34a3b-kube-api-access-jg6z4\") pod \"834629bf-75a3-4241-b3ce-2aec76e34a3b\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.325605 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834629bf-75a3-4241-b3ce-2aec76e34a3b-serving-cert\") pod \"834629bf-75a3-4241-b3ce-2aec76e34a3b\" (UID: \"834629bf-75a3-4241-b3ce-2aec76e34a3b\") " Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.326145 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-config\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.326187 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-client-ca\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.326278 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de614c7a-565a-4c3a-ba14-354795d1b844-serving-cert\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " 
pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.326320 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2p9\" (UniqueName: \"kubernetes.io/projected/de614c7a-565a-4c3a-ba14-354795d1b844-kube-api-access-4q2p9\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.327450 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-config" (OuterVolumeSpecName: "config") pod "834629bf-75a3-4241-b3ce-2aec76e34a3b" (UID: "834629bf-75a3-4241-b3ce-2aec76e34a3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.327918 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-client-ca" (OuterVolumeSpecName: "client-ca") pod "834629bf-75a3-4241-b3ce-2aec76e34a3b" (UID: "834629bf-75a3-4241-b3ce-2aec76e34a3b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.363309 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834629bf-75a3-4241-b3ce-2aec76e34a3b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "834629bf-75a3-4241-b3ce-2aec76e34a3b" (UID: "834629bf-75a3-4241-b3ce-2aec76e34a3b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.389045 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834629bf-75a3-4241-b3ce-2aec76e34a3b-kube-api-access-jg6z4" (OuterVolumeSpecName: "kube-api-access-jg6z4") pod "834629bf-75a3-4241-b3ce-2aec76e34a3b" (UID: "834629bf-75a3-4241-b3ce-2aec76e34a3b"). InnerVolumeSpecName "kube-api-access-jg6z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.389731 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" event={"ID":"834629bf-75a3-4241-b3ce-2aec76e34a3b","Type":"ContainerDied","Data":"6f68af1e62bdfd912dc105b12bd514b3a0f5decc447b7126eeec709686651908"} Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.389794 5023 scope.go:117] "RemoveContainer" containerID="73b3201a7dd66b02addaa85208ef15ebbf8e6afcf3f823da8b6cd7cf963b044d" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.389943 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.429936 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"] Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431597 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de614c7a-565a-4c3a-ba14-354795d1b844-serving-cert\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431718 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q2p9\" (UniqueName: \"kubernetes.io/projected/de614c7a-565a-4c3a-ba14-354795d1b844-kube-api-access-4q2p9\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431788 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-config\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431809 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-client-ca\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " 
pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431900 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431912 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/834629bf-75a3-4241-b3ce-2aec76e34a3b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431921 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg6z4\" (UniqueName: \"kubernetes.io/projected/834629bf-75a3-4241-b3ce-2aec76e34a3b-kube-api-access-jg6z4\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.431931 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834629bf-75a3-4241-b3ce-2aec76e34a3b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.433160 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-client-ca\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.433381 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-config\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc 
kubenswrapper[5023]: I0219 08:03:33.443797 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-66ljm"] Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.451890 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q2p9\" (UniqueName: \"kubernetes.io/projected/de614c7a-565a-4c3a-ba14-354795d1b844-kube-api-access-4q2p9\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.453469 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de614c7a-565a-4c3a-ba14-354795d1b844-serving-cert\") pod \"route-controller-manager-8df789f69-wk2pj\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.487077 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834629bf-75a3-4241-b3ce-2aec76e34a3b" path="/var/lib/kubelet/pods/834629bf-75a3-4241-b3ce-2aec76e34a3b/volumes" Feb 19 08:03:33 crc kubenswrapper[5023]: I0219 08:03:33.632230 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:35 crc kubenswrapper[5023]: I0219 08:03:35.738346 5023 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mrmbc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 08:03:35 crc kubenswrapper[5023]: I0219 08:03:35.738828 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" podUID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.410741 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" event={"ID":"d1772c08-71ce-47f2-be19-6b588dd6e7d5","Type":"ContainerDied","Data":"acc2582ccdb4cf4a688459f67f2f6ab2467dd68e804e8d908c665d617f47c924"} Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.411191 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acc2582ccdb4cf4a688459f67f2f6ab2467dd68e804e8d908c665d617f47c924" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.418724 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.493336 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-client-ca\") pod \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.493449 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-proxy-ca-bundles\") pod \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.493479 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-config\") pod \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.493527 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj467\" (UniqueName: \"kubernetes.io/projected/d1772c08-71ce-47f2-be19-6b588dd6e7d5-kube-api-access-dj467\") pod \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.495685 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1772c08-71ce-47f2-be19-6b588dd6e7d5-serving-cert\") pod \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\" (UID: \"d1772c08-71ce-47f2-be19-6b588dd6e7d5\") " Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.496848 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d1772c08-71ce-47f2-be19-6b588dd6e7d5" (UID: "d1772c08-71ce-47f2-be19-6b588dd6e7d5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.497392 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "d1772c08-71ce-47f2-be19-6b588dd6e7d5" (UID: "d1772c08-71ce-47f2-be19-6b588dd6e7d5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.498299 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-config" (OuterVolumeSpecName: "config") pod "d1772c08-71ce-47f2-be19-6b588dd6e7d5" (UID: "d1772c08-71ce-47f2-be19-6b588dd6e7d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.499295 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-8mws9"] Feb 19 08:03:36 crc kubenswrapper[5023]: E0219 08:03:36.499555 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" containerName="controller-manager" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.499580 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" containerName="controller-manager" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.500112 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" containerName="controller-manager" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.500469 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1772c08-71ce-47f2-be19-6b588dd6e7d5-kube-api-access-dj467" (OuterVolumeSpecName: "kube-api-access-dj467") pod "d1772c08-71ce-47f2-be19-6b588dd6e7d5" (UID: "d1772c08-71ce-47f2-be19-6b588dd6e7d5"). InnerVolumeSpecName "kube-api-access-dj467". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.501475 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.506426 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1772c08-71ce-47f2-be19-6b588dd6e7d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d1772c08-71ce-47f2-be19-6b588dd6e7d5" (UID: "d1772c08-71ce-47f2-be19-6b588dd6e7d5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.516125 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-8mws9"] Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.597806 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac059f3-bfc6-438d-ac27-9f11f2029386-serving-cert\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.597893 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-client-ca\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.597976 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xzkf\" (UniqueName: \"kubernetes.io/projected/4ac059f3-bfc6-438d-ac27-9f11f2029386-kube-api-access-8xzkf\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.598015 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-proxy-ca-bundles\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " 
pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.598038 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-config\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.598095 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj467\" (UniqueName: \"kubernetes.io/projected/d1772c08-71ce-47f2-be19-6b588dd6e7d5-kube-api-access-dj467\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.598108 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1772c08-71ce-47f2-be19-6b588dd6e7d5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.598120 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.598132 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.598140 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1772c08-71ce-47f2-be19-6b588dd6e7d5-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.700142 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4ac059f3-bfc6-438d-ac27-9f11f2029386-serving-cert\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.700239 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-client-ca\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.700303 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xzkf\" (UniqueName: \"kubernetes.io/projected/4ac059f3-bfc6-438d-ac27-9f11f2029386-kube-api-access-8xzkf\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.700367 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-proxy-ca-bundles\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.700439 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-config\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.701647 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-client-ca\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.702068 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-config\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.702307 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-proxy-ca-bundles\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.705384 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac059f3-bfc6-438d-ac27-9f11f2029386-serving-cert\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.720578 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xzkf\" (UniqueName: \"kubernetes.io/projected/4ac059f3-bfc6-438d-ac27-9f11f2029386-kube-api-access-8xzkf\") pod \"controller-manager-76bddb996f-8mws9\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 
08:03:36 crc kubenswrapper[5023]: I0219 08:03:36.848311 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:37 crc kubenswrapper[5023]: I0219 08:03:37.416725 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrmbc" Feb 19 08:03:37 crc kubenswrapper[5023]: I0219 08:03:37.445611 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrmbc"] Feb 19 08:03:37 crc kubenswrapper[5023]: I0219 08:03:37.448450 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrmbc"] Feb 19 08:03:37 crc kubenswrapper[5023]: I0219 08:03:37.483902 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1772c08-71ce-47f2-be19-6b588dd6e7d5" path="/var/lib/kubelet/pods/d1772c08-71ce-47f2-be19-6b588dd6e7d5/volumes" Feb 19 08:03:41 crc kubenswrapper[5023]: I0219 08:03:41.870243 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:03:41 crc kubenswrapper[5023]: I0219 08:03:41.871345 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.034637 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.035455 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlm44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q274g_openshift-marketplace(4d82228e-e1cf-4274-8b24-5468d4c46e38): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.037171 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q274g" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.038257 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.039027 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbzpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cntt2_openshift-marketplace(6c8fc31b-73c3-4a18-bf3a-d684464c7625): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.040714 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cntt2" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" Feb 19 08:03:45 crc 
kubenswrapper[5023]: I0219 08:03:45.335747 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-8mws9"] Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.389073 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs4qh" Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.391260 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bdvrm"] Feb 19 08:03:45 crc kubenswrapper[5023]: W0219 08:03:45.402247 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e27029b_2441_4434_bbd8_849e96acc2da.slice/crio-5836ec1dd8631b7e9284d1edbdbed1d1af350307b2cfa5d593ecff2a20ee910b WatchSource:0}: Error finding container 5836ec1dd8631b7e9284d1edbdbed1d1af350307b2cfa5d593ecff2a20ee910b: Status 404 returned error can't find the container with id 5836ec1dd8631b7e9284d1edbdbed1d1af350307b2cfa5d593ecff2a20ee910b Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.457491 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj"] Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.567758 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" event={"ID":"4ac059f3-bfc6-438d-ac27-9f11f2029386","Type":"ContainerStarted","Data":"1746f3a2d394593c4ff2a2621da73e7ae11b0727ca474896064a893f2a57f7d3"} Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.570384 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sp47" event={"ID":"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b","Type":"ContainerStarted","Data":"3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18"} Feb 19 08:03:45 crc 
kubenswrapper[5023]: I0219 08:03:45.597490 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cdmv" event={"ID":"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62","Type":"ContainerStarted","Data":"28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0"} Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.613017 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmqg6" event={"ID":"1f33f560-79f7-4acd-b439-22e6969ca87c","Type":"ContainerStarted","Data":"f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c"} Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.616275 5023 generic.go:334] "Generic (PLEG): container finished" podID="3821bfef-83d2-421f-b316-00e277a9341d" containerID="070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7" exitCode=0 Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.616324 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcd4q" event={"ID":"3821bfef-83d2-421f-b316-00e277a9341d","Type":"ContainerDied","Data":"070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7"} Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.622455 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" event={"ID":"9e27029b-2441-4434-bbd8-849e96acc2da","Type":"ContainerStarted","Data":"5836ec1dd8631b7e9284d1edbdbed1d1af350307b2cfa5d593ecff2a20ee910b"} Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.628875 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerStarted","Data":"bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8"} Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.633285 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerID="76aeccb49a51f8d227a09ff865a90352aba7580e4c2655880e1572e1d689ee58" exitCode=0 Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.633580 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqrqx" event={"ID":"e6e2a6d5-58be-494c-b034-b5d81da8e46d","Type":"ContainerDied","Data":"76aeccb49a51f8d227a09ff865a90352aba7580e4c2655880e1572e1d689ee58"} Feb 19 08:03:45 crc kubenswrapper[5023]: I0219 08:03:45.637043 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" event={"ID":"de614c7a-565a-4c3a-ba14-354795d1b844","Type":"ContainerStarted","Data":"8998ce90fb95b9b58fc48bdb4f48905c0cbe937dddc39ce02502b0e9abf056de"} Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.642321 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cntt2" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.642687 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q274g" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" Feb 19 08:03:45 crc kubenswrapper[5023]: E0219 08:03:45.850901 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f33f560_79f7_4acd_b439_22e6969ca87c.slice/crio-conmon-f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c.scope\": RecentStats: unable to find data in 
memory cache]" Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.649599 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cdmv" event={"ID":"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62","Type":"ContainerDied","Data":"28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0"} Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.649451 5023 generic.go:334] "Generic (PLEG): container finished" podID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerID="28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0" exitCode=0 Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.656487 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" event={"ID":"de614c7a-565a-4c3a-ba14-354795d1b844","Type":"ContainerStarted","Data":"fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8"} Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.658750 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.661255 5023 generic.go:334] "Generic (PLEG): container finished" podID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerID="f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c" exitCode=0 Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.661332 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmqg6" event={"ID":"1f33f560-79f7-4acd-b439-22e6969ca87c","Type":"ContainerDied","Data":"f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c"} Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.664310 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" 
event={"ID":"9e27029b-2441-4434-bbd8-849e96acc2da","Type":"ContainerStarted","Data":"005e9a83b88e4fb4fc4ac59fec8b5b2f2ac0e02be477e9a4f73cff383b5835fd"} Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.669411 5023 generic.go:334] "Generic (PLEG): container finished" podID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerID="bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8" exitCode=0 Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.669462 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerDied","Data":"bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8"} Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.685991 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" event={"ID":"4ac059f3-bfc6-438d-ac27-9f11f2029386","Type":"ContainerStarted","Data":"016f1edd86de837cfcee37ae4fea4bd6d98fab1daea5f1d9fa7410dd1ec78c7e"} Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.694099 5023 generic.go:334] "Generic (PLEG): container finished" podID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerID="3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18" exitCode=0 Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.694191 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sp47" event={"ID":"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b","Type":"ContainerDied","Data":"3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18"} Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.750200 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" podStartSLOduration=17.75018433 podStartE2EDuration="17.75018433s" podCreationTimestamp="2026-02-19 08:03:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:46.747565191 +0000 UTC m=+184.404684139" watchObservedRunningTime="2026-02-19 08:03:46.75018433 +0000 UTC m=+184.407303278" Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.799158 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.832446 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" podStartSLOduration=17.832424957 podStartE2EDuration="17.832424957s" podCreationTimestamp="2026-02-19 08:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:46.829999793 +0000 UTC m=+184.487118741" watchObservedRunningTime="2026-02-19 08:03:46.832424957 +0000 UTC m=+184.489543905" Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.849073 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:46 crc kubenswrapper[5023]: I0219 08:03:46.854665 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:47 crc kubenswrapper[5023]: I0219 08:03:47.703491 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bdvrm" event={"ID":"9e27029b-2441-4434-bbd8-849e96acc2da","Type":"ContainerStarted","Data":"74ea6cbbc62f27f113be1c441bf9cf209144d10c9dcf7c58c5166cb7bc06b356"} Feb 19 08:03:47 crc kubenswrapper[5023]: I0219 08:03:47.720487 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/network-metrics-daemon-bdvrm" podStartSLOduration=163.720469335 podStartE2EDuration="2m43.720469335s" podCreationTimestamp="2026-02-19 08:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:47.717611279 +0000 UTC m=+185.374730227" watchObservedRunningTime="2026-02-19 08:03:47.720469335 +0000 UTC m=+185.377588283" Feb 19 08:03:48 crc kubenswrapper[5023]: I0219 08:03:48.711512 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcd4q" event={"ID":"3821bfef-83d2-421f-b316-00e277a9341d","Type":"ContainerStarted","Data":"8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316"} Feb 19 08:03:48 crc kubenswrapper[5023]: I0219 08:03:48.729367 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mcd4q" podStartSLOduration=2.996125073 podStartE2EDuration="34.729345731s" podCreationTimestamp="2026-02-19 08:03:14 +0000 UTC" firstStartedPulling="2026-02-19 08:03:15.965713987 +0000 UTC m=+153.622832935" lastFinishedPulling="2026-02-19 08:03:47.698934645 +0000 UTC m=+185.356053593" observedRunningTime="2026-02-19 08:03:48.728062358 +0000 UTC m=+186.385181306" watchObservedRunningTime="2026-02-19 08:03:48.729345731 +0000 UTC m=+186.386464679" Feb 19 08:03:49 crc kubenswrapper[5023]: I0219 08:03:49.060416 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-8mws9"] Feb 19 08:03:49 crc kubenswrapper[5023]: I0219 08:03:49.156223 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj"] Feb 19 08:03:49 crc kubenswrapper[5023]: I0219 08:03:49.731785 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqrqx" 
event={"ID":"e6e2a6d5-58be-494c-b034-b5d81da8e46d","Type":"ContainerStarted","Data":"96baf770295170fe97057d439aabbbe10bc062f05ec3ae04b675a8e0244fc7c2"} Feb 19 08:03:49 crc kubenswrapper[5023]: I0219 08:03:49.731954 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" podUID="de614c7a-565a-4c3a-ba14-354795d1b844" containerName="route-controller-manager" containerID="cri-o://fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8" gracePeriod=30 Feb 19 08:03:49 crc kubenswrapper[5023]: I0219 08:03:49.732302 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" podUID="4ac059f3-bfc6-438d-ac27-9f11f2029386" containerName="controller-manager" containerID="cri-o://016f1edd86de837cfcee37ae4fea4bd6d98fab1daea5f1d9fa7410dd1ec78c7e" gracePeriod=30 Feb 19 08:03:49 crc kubenswrapper[5023]: I0219 08:03:49.751575 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fqrqx" podStartSLOduration=3.909736167 podStartE2EDuration="35.75155178s" podCreationTimestamp="2026-02-19 08:03:14 +0000 UTC" firstStartedPulling="2026-02-19 08:03:17.044912884 +0000 UTC m=+154.702031832" lastFinishedPulling="2026-02-19 08:03:48.886728497 +0000 UTC m=+186.543847445" observedRunningTime="2026-02-19 08:03:49.748952921 +0000 UTC m=+187.406071869" watchObservedRunningTime="2026-02-19 08:03:49.75155178 +0000 UTC m=+187.408670718" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.431454 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.457460 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878"] Feb 19 08:03:50 crc kubenswrapper[5023]: E0219 08:03:50.457732 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de614c7a-565a-4c3a-ba14-354795d1b844" containerName="route-controller-manager" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.457749 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="de614c7a-565a-4c3a-ba14-354795d1b844" containerName="route-controller-manager" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.457858 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="de614c7a-565a-4c3a-ba14-354795d1b844" containerName="route-controller-manager" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.458310 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.467696 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878"] Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535029 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-client-ca\") pod \"de614c7a-565a-4c3a-ba14-354795d1b844\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535096 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q2p9\" (UniqueName: \"kubernetes.io/projected/de614c7a-565a-4c3a-ba14-354795d1b844-kube-api-access-4q2p9\") pod \"de614c7a-565a-4c3a-ba14-354795d1b844\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535116 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-config\") pod \"de614c7a-565a-4c3a-ba14-354795d1b844\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535260 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de614c7a-565a-4c3a-ba14-354795d1b844-serving-cert\") pod \"de614c7a-565a-4c3a-ba14-354795d1b844\" (UID: \"de614c7a-565a-4c3a-ba14-354795d1b844\") " Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535464 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/111e1778-755c-459f-9e15-be059572c236-serving-cert\") pod 
\"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535490 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-config\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535514 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-client-ca\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.535571 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxtp\" (UniqueName: \"kubernetes.io/projected/111e1778-755c-459f-9e15-be059572c236-kube-api-access-mvxtp\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.536298 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-client-ca" (OuterVolumeSpecName: "client-ca") pod "de614c7a-565a-4c3a-ba14-354795d1b844" (UID: "de614c7a-565a-4c3a-ba14-354795d1b844"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.537033 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-config" (OuterVolumeSpecName: "config") pod "de614c7a-565a-4c3a-ba14-354795d1b844" (UID: "de614c7a-565a-4c3a-ba14-354795d1b844"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.542363 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de614c7a-565a-4c3a-ba14-354795d1b844-kube-api-access-4q2p9" (OuterVolumeSpecName: "kube-api-access-4q2p9") pod "de614c7a-565a-4c3a-ba14-354795d1b844" (UID: "de614c7a-565a-4c3a-ba14-354795d1b844"). InnerVolumeSpecName "kube-api-access-4q2p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.542794 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de614c7a-565a-4c3a-ba14-354795d1b844-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de614c7a-565a-4c3a-ba14-354795d1b844" (UID: "de614c7a-565a-4c3a-ba14-354795d1b844"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637049 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-config\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637126 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/111e1778-755c-459f-9e15-be059572c236-serving-cert\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637152 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-client-ca\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637184 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvxtp\" (UniqueName: \"kubernetes.io/projected/111e1778-755c-459f-9e15-be059572c236-kube-api-access-mvxtp\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637250 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637265 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q2p9\" (UniqueName: \"kubernetes.io/projected/de614c7a-565a-4c3a-ba14-354795d1b844-kube-api-access-4q2p9\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637276 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de614c7a-565a-4c3a-ba14-354795d1b844-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.637287 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de614c7a-565a-4c3a-ba14-354795d1b844-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.639888 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-client-ca\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.640255 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-config\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.642592 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/111e1778-755c-459f-9e15-be059572c236-serving-cert\") pod 
\"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.651547 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvxtp\" (UniqueName: \"kubernetes.io/projected/111e1778-755c-459f-9e15-be059572c236-kube-api-access-mvxtp\") pod \"route-controller-manager-6b8cc89779-d4878\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.743193 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sp47" event={"ID":"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b","Type":"ContainerStarted","Data":"b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67"} Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.748931 5023 generic.go:334] "Generic (PLEG): container finished" podID="4ac059f3-bfc6-438d-ac27-9f11f2029386" containerID="016f1edd86de837cfcee37ae4fea4bd6d98fab1daea5f1d9fa7410dd1ec78c7e" exitCode=0 Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.749064 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" event={"ID":"4ac059f3-bfc6-438d-ac27-9f11f2029386","Type":"ContainerDied","Data":"016f1edd86de837cfcee37ae4fea4bd6d98fab1daea5f1d9fa7410dd1ec78c7e"} Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.756216 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cdmv" event={"ID":"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62","Type":"ContainerStarted","Data":"c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345"} Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.758371 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-hmqg6" event={"ID":"1f33f560-79f7-4acd-b439-22e6969ca87c","Type":"ContainerStarted","Data":"90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87"} Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.760852 5023 generic.go:334] "Generic (PLEG): container finished" podID="de614c7a-565a-4c3a-ba14-354795d1b844" containerID="fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8" exitCode=0 Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.760913 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" event={"ID":"de614c7a-565a-4c3a-ba14-354795d1b844","Type":"ContainerDied","Data":"fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8"} Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.760933 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" event={"ID":"de614c7a-565a-4c3a-ba14-354795d1b844","Type":"ContainerDied","Data":"8998ce90fb95b9b58fc48bdb4f48905c0cbe937dddc39ce02502b0e9abf056de"} Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.760954 5023 scope.go:117] "RemoveContainer" containerID="fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.761075 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.765845 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8sp47" podStartSLOduration=2.461385764 podStartE2EDuration="38.765833059s" podCreationTimestamp="2026-02-19 08:03:12 +0000 UTC" firstStartedPulling="2026-02-19 08:03:13.850559455 +0000 UTC m=+151.507678393" lastFinishedPulling="2026-02-19 08:03:50.15500673 +0000 UTC m=+187.812125688" observedRunningTime="2026-02-19 08:03:50.764774051 +0000 UTC m=+188.421893019" watchObservedRunningTime="2026-02-19 08:03:50.765833059 +0000 UTC m=+188.422952007" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.775452 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.781804 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerStarted","Data":"d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908"} Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.794866 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2cdmv" podStartSLOduration=2.615792486 podStartE2EDuration="35.794848318s" podCreationTimestamp="2026-02-19 08:03:15 +0000 UTC" firstStartedPulling="2026-02-19 08:03:17.050244215 +0000 UTC m=+154.707363163" lastFinishedPulling="2026-02-19 08:03:50.229300047 +0000 UTC m=+187.886418995" observedRunningTime="2026-02-19 08:03:50.788819358 +0000 UTC m=+188.445938306" watchObservedRunningTime="2026-02-19 08:03:50.794848318 +0000 UTC m=+188.451967266" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.800840 5023 scope.go:117] 
"RemoveContainer" containerID="fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8" Feb 19 08:03:50 crc kubenswrapper[5023]: E0219 08:03:50.803386 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8\": container with ID starting with fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8 not found: ID does not exist" containerID="fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.803422 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8"} err="failed to get container status \"fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8\": rpc error: code = NotFound desc = could not find container \"fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8\": container with ID starting with fd868d913478982a4589dcbd009b816f0d64c4d35f536cfd2cecb12a4d3f58b8 not found: ID does not exist" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.814923 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hmqg6" podStartSLOduration=2.534798458 podStartE2EDuration="38.814897338s" podCreationTimestamp="2026-02-19 08:03:12 +0000 UTC" firstStartedPulling="2026-02-19 08:03:13.837924651 +0000 UTC m=+151.495043599" lastFinishedPulling="2026-02-19 08:03:50.118023531 +0000 UTC m=+187.775142479" observedRunningTime="2026-02-19 08:03:50.814207 +0000 UTC m=+188.471325948" watchObservedRunningTime="2026-02-19 08:03:50.814897338 +0000 UTC m=+188.472016286" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.836734 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj"] Feb 19 
08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.847845 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8df789f69-wk2pj"] Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.867400 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbcq7" podStartSLOduration=2.366114846 podStartE2EDuration="35.867379488s" podCreationTimestamp="2026-02-19 08:03:15 +0000 UTC" firstStartedPulling="2026-02-19 08:03:17.041990546 +0000 UTC m=+154.699109494" lastFinishedPulling="2026-02-19 08:03:50.543255188 +0000 UTC m=+188.200374136" observedRunningTime="2026-02-19 08:03:50.866217227 +0000 UTC m=+188.523336175" watchObservedRunningTime="2026-02-19 08:03:50.867379488 +0000 UTC m=+188.524498446" Feb 19 08:03:50 crc kubenswrapper[5023]: I0219 08:03:50.965245 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.051418 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac059f3-bfc6-438d-ac27-9f11f2029386-serving-cert\") pod \"4ac059f3-bfc6-438d-ac27-9f11f2029386\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.051488 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-config\") pod \"4ac059f3-bfc6-438d-ac27-9f11f2029386\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.051583 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-client-ca\") pod 
\"4ac059f3-bfc6-438d-ac27-9f11f2029386\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.051614 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-proxy-ca-bundles\") pod \"4ac059f3-bfc6-438d-ac27-9f11f2029386\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.051671 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xzkf\" (UniqueName: \"kubernetes.io/projected/4ac059f3-bfc6-438d-ac27-9f11f2029386-kube-api-access-8xzkf\") pod \"4ac059f3-bfc6-438d-ac27-9f11f2029386\" (UID: \"4ac059f3-bfc6-438d-ac27-9f11f2029386\") " Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.052408 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ac059f3-bfc6-438d-ac27-9f11f2029386" (UID: "4ac059f3-bfc6-438d-ac27-9f11f2029386"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.052778 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-config" (OuterVolumeSpecName: "config") pod "4ac059f3-bfc6-438d-ac27-9f11f2029386" (UID: "4ac059f3-bfc6-438d-ac27-9f11f2029386"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.052790 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4ac059f3-bfc6-438d-ac27-9f11f2029386" (UID: "4ac059f3-bfc6-438d-ac27-9f11f2029386"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.059724 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac059f3-bfc6-438d-ac27-9f11f2029386-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ac059f3-bfc6-438d-ac27-9f11f2029386" (UID: "4ac059f3-bfc6-438d-ac27-9f11f2029386"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.059831 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac059f3-bfc6-438d-ac27-9f11f2029386-kube-api-access-8xzkf" (OuterVolumeSpecName: "kube-api-access-8xzkf") pod "4ac059f3-bfc6-438d-ac27-9f11f2029386" (UID: "4ac059f3-bfc6-438d-ac27-9f11f2029386"). InnerVolumeSpecName "kube-api-access-8xzkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.094067 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878"] Feb 19 08:03:51 crc kubenswrapper[5023]: W0219 08:03:51.100682 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod111e1778_755c_459f_9e15_be059572c236.slice/crio-1f4d0975749f9920424b21bf6909bd86b08e01134ef21d69e9325d0500e71e64 WatchSource:0}: Error finding container 1f4d0975749f9920424b21bf6909bd86b08e01134ef21d69e9325d0500e71e64: Status 404 returned error can't find the container with id 1f4d0975749f9920424b21bf6909bd86b08e01134ef21d69e9325d0500e71e64 Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.153180 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.153218 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.153235 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xzkf\" (UniqueName: \"kubernetes.io/projected/4ac059f3-bfc6-438d-ac27-9f11f2029386-kube-api-access-8xzkf\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.153244 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac059f3-bfc6-438d-ac27-9f11f2029386-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.153260 5023 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4ac059f3-bfc6-438d-ac27-9f11f2029386-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.495993 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de614c7a-565a-4c3a-ba14-354795d1b844" path="/var/lib/kubelet/pods/de614c7a-565a-4c3a-ba14-354795d1b844/volumes" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.783500 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" event={"ID":"111e1778-755c-459f-9e15-be059572c236","Type":"ContainerStarted","Data":"fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86"} Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.783667 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.783683 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" event={"ID":"111e1778-755c-459f-9e15-be059572c236","Type":"ContainerStarted","Data":"1f4d0975749f9920424b21bf6909bd86b08e01134ef21d69e9325d0500e71e64"} Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.786588 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.786606 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76bddb996f-8mws9" event={"ID":"4ac059f3-bfc6-438d-ac27-9f11f2029386","Type":"ContainerDied","Data":"1746f3a2d394593c4ff2a2621da73e7ae11b0727ca474896064a893f2a57f7d3"} Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.786715 5023 scope.go:117] "RemoveContainer" containerID="016f1edd86de837cfcee37ae4fea4bd6d98fab1daea5f1d9fa7410dd1ec78c7e" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.793368 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.807509 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" podStartSLOduration=2.8074827239999998 podStartE2EDuration="2.807482724s" podCreationTimestamp="2026-02-19 08:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:51.803133978 +0000 UTC m=+189.460252926" watchObservedRunningTime="2026-02-19 08:03:51.807482724 +0000 UTC m=+189.464601672" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.812761 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.847566 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76bddb996f-8mws9"] Feb 19 08:03:51 crc kubenswrapper[5023]: I0219 08:03:51.871048 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-76bddb996f-8mws9"] Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.590170 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.590227 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.618668 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb"] Feb 19 08:03:52 crc kubenswrapper[5023]: E0219 08:03:52.618954 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac059f3-bfc6-438d-ac27-9f11f2029386" containerName="controller-manager" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.618967 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac059f3-bfc6-438d-ac27-9f11f2029386" containerName="controller-manager" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.619080 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac059f3-bfc6-438d-ac27-9f11f2029386" containerName="controller-manager" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.619572 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.622137 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.622334 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.622382 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.623797 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.623937 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.624081 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.632494 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb"] Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.635408 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.673957 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lvz\" (UniqueName: \"kubernetes.io/projected/a2e6430c-b6e4-4df2-9906-016da16b3646-kube-api-access-r6lvz\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " 
pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.674036 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-config\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.674063 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-client-ca\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.674082 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e6430c-b6e4-4df2-9906-016da16b3646-serving-cert\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.674115 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-proxy-ca-bundles\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.775900 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6lvz\" (UniqueName: 
\"kubernetes.io/projected/a2e6430c-b6e4-4df2-9906-016da16b3646-kube-api-access-r6lvz\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.776012 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-config\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.776055 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-client-ca\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.776089 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e6430c-b6e4-4df2-9906-016da16b3646-serving-cert\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.776136 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-proxy-ca-bundles\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.777334 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-client-ca\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.777515 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-config\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.778691 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-proxy-ca-bundles\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.783967 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e6430c-b6e4-4df2-9906-016da16b3646-serving-cert\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.802502 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6lvz\" (UniqueName: \"kubernetes.io/projected/a2e6430c-b6e4-4df2-9906-016da16b3646-kube-api-access-r6lvz\") pod \"controller-manager-5687fb4dcf-j2lzb\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 
08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.938857 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.990929 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:52 crc kubenswrapper[5023]: I0219 08:03:52.990996 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.407731 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb"] Feb 19 08:03:53 crc kubenswrapper[5023]: W0219 08:03:53.425820 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2e6430c_b6e4_4df2_9906_016da16b3646.slice/crio-618c742e21282f59d4466b6fb48413e9dc3ac3aa7d2f2c919490e61299f6eef2 WatchSource:0}: Error finding container 618c742e21282f59d4466b6fb48413e9dc3ac3aa7d2f2c919490e61299f6eef2: Status 404 returned error can't find the container with id 618c742e21282f59d4466b6fb48413e9dc3ac3aa7d2f2c919490e61299f6eef2 Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.483722 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac059f3-bfc6-438d-ac27-9f11f2029386" path="/var/lib/kubelet/pods/4ac059f3-bfc6-438d-ac27-9f11f2029386/volumes" Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.502648 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xn8fp"] Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.724469 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hmqg6" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="registry-server" 
probeResult="failure" output=< Feb 19 08:03:53 crc kubenswrapper[5023]: timeout: failed to connect service ":50051" within 1s Feb 19 08:03:53 crc kubenswrapper[5023]: > Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.801388 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" event={"ID":"a2e6430c-b6e4-4df2-9906-016da16b3646","Type":"ContainerStarted","Data":"fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4"} Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.801458 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" event={"ID":"a2e6430c-b6e4-4df2-9906-016da16b3646","Type":"ContainerStarted","Data":"618c742e21282f59d4466b6fb48413e9dc3ac3aa7d2f2c919490e61299f6eef2"} Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.801686 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.803788 5023 patch_prober.go:28] interesting pod/controller-manager-5687fb4dcf-j2lzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.803840 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" podUID="a2e6430c-b6e4-4df2-9906-016da16b3646" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 19 08:03:53 crc kubenswrapper[5023]: I0219 08:03:53.830270 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" podStartSLOduration=4.830244738 podStartE2EDuration="4.830244738s" podCreationTimestamp="2026-02-19 08:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:03:53.829612151 +0000 UTC m=+191.486731109" watchObservedRunningTime="2026-02-19 08:03:53.830244738 +0000 UTC m=+191.487363686" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.032789 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8sp47" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="registry-server" probeResult="failure" output=< Feb 19 08:03:54 crc kubenswrapper[5023]: timeout: failed to connect service ":50051" within 1s Feb 19 08:03:54 crc kubenswrapper[5023]: > Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.538953 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.539045 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.639250 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.811574 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.846825 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.947758 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.948118 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:54 crc kubenswrapper[5023]: I0219 08:03:54.992588 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.400379 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.401554 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.404739 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.410010 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.418573 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.517485 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.517706 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.619551 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.619679 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.619817 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.650374 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.686410 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.686792 5023 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.757419 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:55 crc kubenswrapper[5023]: I0219 08:03:55.869981 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:56 crc kubenswrapper[5023]: I0219 08:03:56.035640 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:56 crc kubenswrapper[5023]: I0219 08:03:56.036022 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:03:56 crc kubenswrapper[5023]: I0219 08:03:56.217741 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 08:03:56 crc kubenswrapper[5023]: W0219 08:03:56.239377 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podce1401fb_64ea_4c75_a6b6_c9bb0e1fd56c.slice/crio-df6bd6769b45c43e614824e5ba848d78e1af3be9da4fc106d3a8727a82c52206 WatchSource:0}: Error finding container df6bd6769b45c43e614824e5ba848d78e1af3be9da4fc106d3a8727a82c52206: Status 404 returned error can't find the container with id df6bd6769b45c43e614824e5ba848d78e1af3be9da4fc106d3a8727a82c52206 Feb 19 08:03:56 crc kubenswrapper[5023]: I0219 08:03:56.733992 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2cdmv" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="registry-server" probeResult="failure" output=< Feb 19 08:03:56 crc kubenswrapper[5023]: timeout: failed to connect service ":50051" within 1s Feb 19 08:03:56 crc kubenswrapper[5023]: > Feb 19 08:03:56 crc 
kubenswrapper[5023]: I0219 08:03:56.820344 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqrqx"] Feb 19 08:03:56 crc kubenswrapper[5023]: I0219 08:03:56.822756 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c","Type":"ContainerStarted","Data":"df6bd6769b45c43e614824e5ba848d78e1af3be9da4fc106d3a8727a82c52206"} Feb 19 08:03:57 crc kubenswrapper[5023]: I0219 08:03:57.089157 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cbcq7" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="registry-server" probeResult="failure" output=< Feb 19 08:03:57 crc kubenswrapper[5023]: timeout: failed to connect service ":50051" within 1s Feb 19 08:03:57 crc kubenswrapper[5023]: > Feb 19 08:03:57 crc kubenswrapper[5023]: I0219 08:03:57.831418 5023 generic.go:334] "Generic (PLEG): container finished" podID="ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c" containerID="9fbcf07142f6b3178ecd761bcd6cd0923ced7fcf1feac1b4e0f26be7cc483f8b" exitCode=0 Feb 19 08:03:57 crc kubenswrapper[5023]: I0219 08:03:57.831591 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c","Type":"ContainerDied","Data":"9fbcf07142f6b3178ecd761bcd6cd0923ced7fcf1feac1b4e0f26be7cc483f8b"} Feb 19 08:03:57 crc kubenswrapper[5023]: I0219 08:03:57.831688 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fqrqx" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="registry-server" containerID="cri-o://96baf770295170fe97057d439aabbbe10bc062f05ec3ae04b675a8e0244fc7c2" gracePeriod=2 Feb 19 08:03:58 crc kubenswrapper[5023]: I0219 08:03:58.848521 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerID="96baf770295170fe97057d439aabbbe10bc062f05ec3ae04b675a8e0244fc7c2" exitCode=0 Feb 19 08:03:58 crc kubenswrapper[5023]: I0219 08:03:58.849104 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqrqx" event={"ID":"e6e2a6d5-58be-494c-b034-b5d81da8e46d","Type":"ContainerDied","Data":"96baf770295170fe97057d439aabbbe10bc062f05ec3ae04b675a8e0244fc7c2"} Feb 19 08:03:58 crc kubenswrapper[5023]: I0219 08:03:58.964908 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.085172 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-utilities\") pod \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.085519 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6gbt\" (UniqueName: \"kubernetes.io/projected/e6e2a6d5-58be-494c-b034-b5d81da8e46d-kube-api-access-r6gbt\") pod \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.085566 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-catalog-content\") pod \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\" (UID: \"e6e2a6d5-58be-494c-b034-b5d81da8e46d\") " Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.087134 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-utilities" (OuterVolumeSpecName: "utilities") pod 
"e6e2a6d5-58be-494c-b034-b5d81da8e46d" (UID: "e6e2a6d5-58be-494c-b034-b5d81da8e46d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.096842 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e2a6d5-58be-494c-b034-b5d81da8e46d-kube-api-access-r6gbt" (OuterVolumeSpecName: "kube-api-access-r6gbt") pod "e6e2a6d5-58be-494c-b034-b5d81da8e46d" (UID: "e6e2a6d5-58be-494c-b034-b5d81da8e46d"). InnerVolumeSpecName "kube-api-access-r6gbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.113880 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6e2a6d5-58be-494c-b034-b5d81da8e46d" (UID: "e6e2a6d5-58be-494c-b034-b5d81da8e46d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.187552 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.187607 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6gbt\" (UniqueName: \"kubernetes.io/projected/e6e2a6d5-58be-494c-b034-b5d81da8e46d-kube-api-access-r6gbt\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.187650 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e2a6d5-58be-494c-b034-b5d81da8e46d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.235667 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.288798 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kube-api-access\") pod \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.289019 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kubelet-dir\") pod \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\" (UID: \"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c\") " Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.289297 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kubelet-dir" 
(OuterVolumeSpecName: "kubelet-dir") pod "ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c" (UID: "ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.293535 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c" (UID: "ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.390505 5023 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.390553 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.857744 5023 generic.go:334] "Generic (PLEG): container finished" podID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerID="4ffabddcc35b30532f58ee7fd852fb540a9bd6dae55d3e6149550ff51dc11cc1" exitCode=0 Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.857840 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q274g" event={"ID":"4d82228e-e1cf-4274-8b24-5468d4c46e38","Type":"ContainerDied","Data":"4ffabddcc35b30532f58ee7fd852fb540a9bd6dae55d3e6149550ff51dc11cc1"} Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.859781 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c","Type":"ContainerDied","Data":"df6bd6769b45c43e614824e5ba848d78e1af3be9da4fc106d3a8727a82c52206"} Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.859800 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6bd6769b45c43e614824e5ba848d78e1af3be9da4fc106d3a8727a82c52206" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.859842 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.872202 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqrqx" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.872206 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqrqx" event={"ID":"e6e2a6d5-58be-494c-b034-b5d81da8e46d","Type":"ContainerDied","Data":"faf9b91a95c88de746917052d1ca76aab94085ff0cd1ca1d9b633ef939d37c22"} Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.872352 5023 scope.go:117] "RemoveContainer" containerID="96baf770295170fe97057d439aabbbe10bc062f05ec3ae04b675a8e0244fc7c2" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.876389 5023 generic.go:334] "Generic (PLEG): container finished" podID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerID="bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890" exitCode=0 Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.876458 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cntt2" event={"ID":"6c8fc31b-73c3-4a18-bf3a-d684464c7625","Type":"ContainerDied","Data":"bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890"} Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.890858 5023 scope.go:117] "RemoveContainer" 
containerID="76aeccb49a51f8d227a09ff865a90352aba7580e4c2655880e1572e1d689ee58" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.910261 5023 scope.go:117] "RemoveContainer" containerID="d63760bac948405178c99e7d65b3e888a4ed3ccfa6e6dda583662bdc54392e77" Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.920734 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqrqx"] Feb 19 08:03:59 crc kubenswrapper[5023]: I0219 08:03:59.931274 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqrqx"] Feb 19 08:04:00 crc kubenswrapper[5023]: I0219 08:04:00.885946 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cntt2" event={"ID":"6c8fc31b-73c3-4a18-bf3a-d684464c7625","Type":"ContainerStarted","Data":"dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e"} Feb 19 08:04:00 crc kubenswrapper[5023]: I0219 08:04:00.888438 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q274g" event={"ID":"4d82228e-e1cf-4274-8b24-5468d4c46e38","Type":"ContainerStarted","Data":"9d6b3faf4981e1cfdda319f7342d56e55fdcf2e7259cc5c8a36a365ac608f65b"} Feb 19 08:04:00 crc kubenswrapper[5023]: I0219 08:04:00.907108 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cntt2" podStartSLOduration=2.405899632 podStartE2EDuration="48.90708009s" podCreationTimestamp="2026-02-19 08:03:12 +0000 UTC" firstStartedPulling="2026-02-19 08:03:13.841637959 +0000 UTC m=+151.498756907" lastFinishedPulling="2026-02-19 08:04:00.342818417 +0000 UTC m=+197.999937365" observedRunningTime="2026-02-19 08:04:00.903023709 +0000 UTC m=+198.560142667" watchObservedRunningTime="2026-02-19 08:04:00.90708009 +0000 UTC m=+198.564199068" Feb 19 08:04:00 crc kubenswrapper[5023]: I0219 08:04:00.929958 5023 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-q274g" podStartSLOduration=2.536392235 podStartE2EDuration="48.929928336s" podCreationTimestamp="2026-02-19 08:03:12 +0000 UTC" firstStartedPulling="2026-02-19 08:03:13.853797891 +0000 UTC m=+151.510916839" lastFinishedPulling="2026-02-19 08:04:00.247333992 +0000 UTC m=+197.904452940" observedRunningTime="2026-02-19 08:04:00.927003576 +0000 UTC m=+198.584122524" watchObservedRunningTime="2026-02-19 08:04:00.929928336 +0000 UTC m=+198.587047324" Feb 19 08:04:01 crc kubenswrapper[5023]: I0219 08:04:01.490255 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" path="/var/lib/kubelet/pods/e6e2a6d5-58be-494c-b034-b5d81da8e46d/volumes" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.387689 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.387798 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.399678 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 08:04:02 crc kubenswrapper[5023]: E0219 08:04:02.399995 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="extract-utilities" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.400012 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="extract-utilities" Feb 19 08:04:02 crc kubenswrapper[5023]: E0219 08:04:02.400024 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="extract-content" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.400032 5023 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="extract-content" Feb 19 08:04:02 crc kubenswrapper[5023]: E0219 08:04:02.400042 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="registry-server" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.400051 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="registry-server" Feb 19 08:04:02 crc kubenswrapper[5023]: E0219 08:04:02.400063 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c" containerName="pruner" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.400071 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c" containerName="pruner" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.400186 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1401fb-64ea-4c75-a6b6-c9bb0e1fd56c" containerName="pruner" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.400203 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e2a6d5-58be-494c-b034-b5d81da8e46d" containerName="registry-server" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.400783 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.413492 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.418349 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.418780 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.458317 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.567876 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kube-api-access\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.567924 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-var-lock\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.567959 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc 
kubenswrapper[5023]: I0219 08:04:02.633792 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.670259 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kube-api-access\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.670344 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-var-lock\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.670396 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.670550 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.670570 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-var-lock\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.681502 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.696361 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kube-api-access\") pod \"installer-9-crc\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.733420 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.774687 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.775162 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:04:02 crc kubenswrapper[5023]: I0219 08:04:02.838565 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cntt2" Feb 19 08:04:03 crc kubenswrapper[5023]: I0219 08:04:03.032577 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:04:03 crc kubenswrapper[5023]: I0219 08:04:03.079664 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:04:03 crc kubenswrapper[5023]: I0219 08:04:03.144432 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 08:04:03 crc kubenswrapper[5023]: I0219 08:04:03.907704 5023 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"00505d77-b5f5-492f-8c8f-33817b2b0b8c","Type":"ContainerStarted","Data":"689b97f553249a04f7e249032806de06b78f9443ce903c2b2773a52e1855d56a"} Feb 19 08:04:03 crc kubenswrapper[5023]: I0219 08:04:03.908069 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"00505d77-b5f5-492f-8c8f-33817b2b0b8c","Type":"ContainerStarted","Data":"009b566da6f226d938fa987ce0d839dee7273118257b9d90bcf20fdb3890de23"} Feb 19 08:04:03 crc kubenswrapper[5023]: I0219 08:04:03.929721 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.929698627 podStartE2EDuration="1.929698627s" podCreationTimestamp="2026-02-19 08:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:04:03.926204891 +0000 UTC m=+201.583323839" watchObservedRunningTime="2026-02-19 08:04:03.929698627 +0000 UTC m=+201.586817575" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.020315 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sp47"] Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.020614 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8sp47" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="registry-server" containerID="cri-o://b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67" gracePeriod=2 Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.448663 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.614243 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-catalog-content\") pod \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.614303 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzc7n\" (UniqueName: \"kubernetes.io/projected/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-kube-api-access-tzc7n\") pod \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.614360 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-utilities\") pod \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\" (UID: \"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b\") " Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.615311 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-utilities" (OuterVolumeSpecName: "utilities") pod "ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" (UID: "ea51e5bd-2947-47e5-b780-e5dbbd82bd5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.615720 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.620813 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-kube-api-access-tzc7n" (OuterVolumeSpecName: "kube-api-access-tzc7n") pod "ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" (UID: "ea51e5bd-2947-47e5-b780-e5dbbd82bd5b"). InnerVolumeSpecName "kube-api-access-tzc7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.664375 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" (UID: "ea51e5bd-2947-47e5-b780-e5dbbd82bd5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.717138 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.717174 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzc7n\" (UniqueName: \"kubernetes.io/projected/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b-kube-api-access-tzc7n\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.736772 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.802402 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.934482 5023 generic.go:334] "Generic (PLEG): container finished" podID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerID="b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67" exitCode=0 Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.934547 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8sp47" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.934641 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sp47" event={"ID":"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b","Type":"ContainerDied","Data":"b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67"} Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.934746 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sp47" event={"ID":"ea51e5bd-2947-47e5-b780-e5dbbd82bd5b","Type":"ContainerDied","Data":"be49261beb984fe665781bc69d7464ccf757b717672af80c0365e1d94904f2d7"} Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.934787 5023 scope.go:117] "RemoveContainer" containerID="b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.952072 5023 scope.go:117] "RemoveContainer" containerID="3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18" Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.968183 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sp47"] Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.970517 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8sp47"] Feb 19 08:04:05 crc kubenswrapper[5023]: I0219 08:04:05.984030 5023 scope.go:117] "RemoveContainer" containerID="635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.002740 5023 scope.go:117] "RemoveContainer" containerID="b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67" Feb 19 08:04:06 crc kubenswrapper[5023]: E0219 08:04:06.003293 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67\": container with ID starting with b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67 not found: ID does not exist" containerID="b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.003325 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67"} err="failed to get container status \"b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67\": rpc error: code = NotFound desc = could not find container \"b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67\": container with ID starting with b92a00ffb80641a86606c4b8ea199c7d000bc5a95c0f4db37fe48c4083c60c67 not found: ID does not exist" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.003352 5023 scope.go:117] "RemoveContainer" containerID="3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18" Feb 19 08:04:06 crc kubenswrapper[5023]: E0219 08:04:06.003732 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18\": container with ID starting with 3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18 not found: ID does not exist" containerID="3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.003765 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18"} err="failed to get container status \"3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18\": rpc error: code = NotFound desc = could not find container \"3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18\": container with ID 
starting with 3fc497ffe5f1374766e8f854c2f54f81b62f6018fe46d876ae40a0d707863a18 not found: ID does not exist" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.003784 5023 scope.go:117] "RemoveContainer" containerID="635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6" Feb 19 08:04:06 crc kubenswrapper[5023]: E0219 08:04:06.004031 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6\": container with ID starting with 635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6 not found: ID does not exist" containerID="635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.004052 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6"} err="failed to get container status \"635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6\": rpc error: code = NotFound desc = could not find container \"635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6\": container with ID starting with 635e08d727ec529513e4bc58189bea98649559ec48d0c20ea4402f34f18741f6 not found: ID does not exist" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.080250 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:04:06 crc kubenswrapper[5023]: I0219 08:04:06.125084 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:04:07 crc kubenswrapper[5023]: I0219 08:04:07.501922 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" path="/var/lib/kubelet/pods/ea51e5bd-2947-47e5-b780-e5dbbd82bd5b/volumes" Feb 19 08:04:09 crc 
kubenswrapper[5023]: I0219 08:04:09.068252 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb"] Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.068576 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" podUID="a2e6430c-b6e4-4df2-9906-016da16b3646" containerName="controller-manager" containerID="cri-o://fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4" gracePeriod=30 Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.098044 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878"] Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.098710 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" podUID="111e1778-755c-459f-9e15-be059572c236" containerName="route-controller-manager" containerID="cri-o://fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86" gracePeriod=30 Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.420724 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbcq7"] Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.421228 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbcq7" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="registry-server" containerID="cri-o://d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908" gracePeriod=2 Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.578639 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.674248 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-config\") pod \"111e1778-755c-459f-9e15-be059572c236\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.674431 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvxtp\" (UniqueName: \"kubernetes.io/projected/111e1778-755c-459f-9e15-be059572c236-kube-api-access-mvxtp\") pod \"111e1778-755c-459f-9e15-be059572c236\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.675431 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-config" (OuterVolumeSpecName: "config") pod "111e1778-755c-459f-9e15-be059572c236" (UID: "111e1778-755c-459f-9e15-be059572c236"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.676006 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/111e1778-755c-459f-9e15-be059572c236-serving-cert\") pod \"111e1778-755c-459f-9e15-be059572c236\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.676116 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-client-ca\") pod \"111e1778-755c-459f-9e15-be059572c236\" (UID: \"111e1778-755c-459f-9e15-be059572c236\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.676484 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.677308 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-client-ca" (OuterVolumeSpecName: "client-ca") pod "111e1778-755c-459f-9e15-be059572c236" (UID: "111e1778-755c-459f-9e15-be059572c236"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.681934 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111e1778-755c-459f-9e15-be059572c236-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "111e1778-755c-459f-9e15-be059572c236" (UID: "111e1778-755c-459f-9e15-be059572c236"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.682972 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111e1778-755c-459f-9e15-be059572c236-kube-api-access-mvxtp" (OuterVolumeSpecName: "kube-api-access-mvxtp") pod "111e1778-755c-459f-9e15-be059572c236" (UID: "111e1778-755c-459f-9e15-be059572c236"). InnerVolumeSpecName "kube-api-access-mvxtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.783601 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvxtp\" (UniqueName: \"kubernetes.io/projected/111e1778-755c-459f-9e15-be059572c236-kube-api-access-mvxtp\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.783692 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/111e1778-755c-459f-9e15-be059572c236-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.783722 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/111e1778-755c-459f-9e15-be059572c236-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.789740 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.798490 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.884545 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-config\") pod \"a2e6430c-b6e4-4df2-9906-016da16b3646\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.884608 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-proxy-ca-bundles\") pod \"a2e6430c-b6e4-4df2-9906-016da16b3646\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.884656 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-client-ca\") pod \"a2e6430c-b6e4-4df2-9906-016da16b3646\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.884803 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-utilities\") pod \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.884846 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt54m\" (UniqueName: \"kubernetes.io/projected/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-kube-api-access-tt54m\") pod \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.884926 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6lvz\" 
(UniqueName: \"kubernetes.io/projected/a2e6430c-b6e4-4df2-9906-016da16b3646-kube-api-access-r6lvz\") pod \"a2e6430c-b6e4-4df2-9906-016da16b3646\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.884971 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e6430c-b6e4-4df2-9906-016da16b3646-serving-cert\") pod \"a2e6430c-b6e4-4df2-9906-016da16b3646\" (UID: \"a2e6430c-b6e4-4df2-9906-016da16b3646\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.885050 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-catalog-content\") pod \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\" (UID: \"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59\") " Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.885653 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a2e6430c-b6e4-4df2-9906-016da16b3646" (UID: "a2e6430c-b6e4-4df2-9906-016da16b3646"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.885823 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-utilities" (OuterVolumeSpecName: "utilities") pod "6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" (UID: "6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.885832 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-client-ca" (OuterVolumeSpecName: "client-ca") pod "a2e6430c-b6e4-4df2-9906-016da16b3646" (UID: "a2e6430c-b6e4-4df2-9906-016da16b3646"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.886192 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-config" (OuterVolumeSpecName: "config") pod "a2e6430c-b6e4-4df2-9906-016da16b3646" (UID: "a2e6430c-b6e4-4df2-9906-016da16b3646"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.888536 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-kube-api-access-tt54m" (OuterVolumeSpecName: "kube-api-access-tt54m") pod "6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" (UID: "6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59"). InnerVolumeSpecName "kube-api-access-tt54m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.888613 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e6430c-b6e4-4df2-9906-016da16b3646-kube-api-access-r6lvz" (OuterVolumeSpecName: "kube-api-access-r6lvz") pod "a2e6430c-b6e4-4df2-9906-016da16b3646" (UID: "a2e6430c-b6e4-4df2-9906-016da16b3646"). InnerVolumeSpecName "kube-api-access-r6lvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.889169 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e6430c-b6e4-4df2-9906-016da16b3646-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a2e6430c-b6e4-4df2-9906-016da16b3646" (UID: "a2e6430c-b6e4-4df2-9906-016da16b3646"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.963050 5023 generic.go:334] "Generic (PLEG): container finished" podID="a2e6430c-b6e4-4df2-9906-016da16b3646" containerID="fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4" exitCode=0 Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.963105 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" event={"ID":"a2e6430c-b6e4-4df2-9906-016da16b3646","Type":"ContainerDied","Data":"fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4"} Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.963131 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.963150 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb" event={"ID":"a2e6430c-b6e4-4df2-9906-016da16b3646","Type":"ContainerDied","Data":"618c742e21282f59d4466b6fb48413e9dc3ac3aa7d2f2c919490e61299f6eef2"} Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.963171 5023 scope.go:117] "RemoveContainer" containerID="fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.970924 5023 generic.go:334] "Generic (PLEG): container finished" podID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerID="d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908" exitCode=0 Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.971153 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbcq7" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.971351 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerDied","Data":"d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908"} Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.971404 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbcq7" event={"ID":"6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59","Type":"ContainerDied","Data":"6c0ec452e11f69b02edf32b6740b068bb36e6e986fde643265800891af345d0f"} Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.974188 5023 generic.go:334] "Generic (PLEG): container finished" podID="111e1778-755c-459f-9e15-be059572c236" containerID="fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86" exitCode=0 Feb 19 08:04:09 crc kubenswrapper[5023]: 
I0219 08:04:09.974215 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" event={"ID":"111e1778-755c-459f-9e15-be059572c236","Type":"ContainerDied","Data":"fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86"} Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.974364 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" event={"ID":"111e1778-755c-459f-9e15-be059572c236","Type":"ContainerDied","Data":"1f4d0975749f9920424b21bf6909bd86b08e01134ef21d69e9325d0500e71e64"} Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.974428 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.987792 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6lvz\" (UniqueName: \"kubernetes.io/projected/a2e6430c-b6e4-4df2-9906-016da16b3646-kube-api-access-r6lvz\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.987818 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e6430c-b6e4-4df2-9906-016da16b3646-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.987950 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.987993 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: 
I0219 08:04:09.988006 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e6430c-b6e4-4df2-9906-016da16b3646-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.988061 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt54m\" (UniqueName: \"kubernetes.io/projected/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-kube-api-access-tt54m\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.988200 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.994469 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb"] Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.994662 5023 scope.go:117] "RemoveContainer" containerID="fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4" Feb 19 08:04:09 crc kubenswrapper[5023]: E0219 08:04:09.995301 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4\": container with ID starting with fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4 not found: ID does not exist" containerID="fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.995343 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4"} err="failed to get container status \"fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4\": rpc error: code = NotFound desc = could not find container 
\"fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4\": container with ID starting with fbf87670cc3d921a10a9c0b30b36b1100778683bf97bb827b9b7ef0b3b58b2c4 not found: ID does not exist" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.995370 5023 scope.go:117] "RemoveContainer" containerID="d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908" Feb 19 08:04:09 crc kubenswrapper[5023]: I0219 08:04:09.998152 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5687fb4dcf-j2lzb"] Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.009643 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878"] Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.012581 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b8cc89779-d4878"] Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.014482 5023 scope.go:117] "RemoveContainer" containerID="bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.032658 5023 scope.go:117] "RemoveContainer" containerID="9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.038340 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" (UID: "6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.046958 5023 scope.go:117] "RemoveContainer" containerID="d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908" Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.047367 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908\": container with ID starting with d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908 not found: ID does not exist" containerID="d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.047408 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908"} err="failed to get container status \"d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908\": rpc error: code = NotFound desc = could not find container \"d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908\": container with ID starting with d6355845774ba97dbeeb1056b932ffcb7357c8c8cef723f22f1a7cc4eacee908 not found: ID does not exist" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.047452 5023 scope.go:117] "RemoveContainer" containerID="bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8" Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.048043 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8\": container with ID starting with bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8 not found: ID does not exist" containerID="bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.048063 
5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8"} err="failed to get container status \"bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8\": rpc error: code = NotFound desc = could not find container \"bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8\": container with ID starting with bf6e344b61398b083c473670e7668e17ba6a26f06a075f321a686020383455c8 not found: ID does not exist" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.048081 5023 scope.go:117] "RemoveContainer" containerID="9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a" Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.048316 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a\": container with ID starting with 9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a not found: ID does not exist" containerID="9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.048335 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a"} err="failed to get container status \"9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a\": rpc error: code = NotFound desc = could not find container \"9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a\": container with ID starting with 9ea4ecaed7208a557c67d302ded4749c7d183986bbae708232910352dc05762a not found: ID does not exist" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.048353 5023 scope.go:117] "RemoveContainer" containerID="fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 
08:04:10.061160 5023 scope.go:117] "RemoveContainer" containerID="fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86" Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.061498 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86\": container with ID starting with fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86 not found: ID does not exist" containerID="fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.061527 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86"} err="failed to get container status \"fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86\": rpc error: code = NotFound desc = could not find container \"fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86\": container with ID starting with fe0ab70882b2ef478e0e4041c579c6c02feeac4fcc393f375b565035b5b40e86 not found: ID does not exist" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.089449 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.311950 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbcq7"] Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.315786 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbcq7"] Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636224 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85b68bb498-dt9ll"] Feb 19 
08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636514 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="extract-utilities"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636530 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="extract-utilities"
Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636549 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111e1778-755c-459f-9e15-be059572c236" containerName="route-controller-manager"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636555 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="111e1778-755c-459f-9e15-be059572c236" containerName="route-controller-manager"
Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636568 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="extract-content"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636574 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="extract-content"
Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636583 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="registry-server"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636588 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="registry-server"
Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636597 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e6430c-b6e4-4df2-9906-016da16b3646" containerName="controller-manager"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636603 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e6430c-b6e4-4df2-9906-016da16b3646" containerName="controller-manager"
Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636611 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="registry-server"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636634 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="registry-server"
Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636645 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="extract-content"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636650 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="extract-content"
Feb 19 08:04:10 crc kubenswrapper[5023]: E0219 08:04:10.636659 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="extract-utilities"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.636665 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="extract-utilities"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.645477 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea51e5bd-2947-47e5-b780-e5dbbd82bd5b" containerName="registry-server"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.645538 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e6430c-b6e4-4df2-9906-016da16b3646" containerName="controller-manager"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.645557 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" containerName="registry-server"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.645575 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="111e1778-755c-459f-9e15-be059572c236" containerName="route-controller-manager"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.646179 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"]
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.647855 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.648002 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.659822 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.659950 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.660341 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.662142 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.662264 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.662395 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.662441 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.662580 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.662801 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.663052 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.663675 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.664003 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85b68bb498-dt9ll"]
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.666115 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.667283 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.672508 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"]
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.798975 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-config\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799106 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d52b19-178a-4d00-9347-2231f39cb2a6-serving-cert\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799187 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-proxy-ca-bundles\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799457 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-config\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799516 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-serving-cert\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799544 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84c54\" (UniqueName: \"kubernetes.io/projected/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-kube-api-access-84c54\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799610 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-client-ca\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799682 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-client-ca\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.799704 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9fg\" (UniqueName: \"kubernetes.io/projected/88d52b19-178a-4d00-9347-2231f39cb2a6-kube-api-access-nm9fg\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901214 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-config\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901704 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-serving-cert\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901739 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84c54\" (UniqueName: \"kubernetes.io/projected/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-kube-api-access-84c54\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901788 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-client-ca\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901830 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-client-ca\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901859 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm9fg\" (UniqueName: \"kubernetes.io/projected/88d52b19-178a-4d00-9347-2231f39cb2a6-kube-api-access-nm9fg\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901915 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-config\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.901963 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d52b19-178a-4d00-9347-2231f39cb2a6-serving-cert\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.902007 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-proxy-ca-bundles\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.902687 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-config\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.903204 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-client-ca\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.903538 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-client-ca\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.904498 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-proxy-ca-bundles\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.904769 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-config\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.908219 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d52b19-178a-4d00-9347-2231f39cb2a6-serving-cert\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.908237 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-serving-cert\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.924037 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84c54\" (UniqueName: \"kubernetes.io/projected/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-kube-api-access-84c54\") pod \"route-controller-manager-676d6c485d-g7r4q\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:10 crc kubenswrapper[5023]: I0219 08:04:10.933545 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm9fg\" (UniqueName: \"kubernetes.io/projected/88d52b19-178a-4d00-9347-2231f39cb2a6-kube-api-access-nm9fg\") pod \"controller-manager-85b68bb498-dt9ll\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.008542 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.026555 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.256097 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85b68bb498-dt9ll"]
Feb 19 08:04:11 crc kubenswrapper[5023]: W0219 08:04:11.261608 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88d52b19_178a_4d00_9347_2231f39cb2a6.slice/crio-b4230b80f5a07854acc44fbfe2a40e58073a7636e90194321d9c35db579dfacc WatchSource:0}: Error finding container b4230b80f5a07854acc44fbfe2a40e58073a7636e90194321d9c35db579dfacc: Status 404 returned error can't find the container with id b4230b80f5a07854acc44fbfe2a40e58073a7636e90194321d9c35db579dfacc
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.278188 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"]
Feb 19 08:04:11 crc kubenswrapper[5023]: W0219 08:04:11.289270 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e3e4553_a36b_4cb3_9e65_ac738dd29bc4.slice/crio-f4bb4c0951c3cfc5f8ac23648d5af237ce2ee76d4b5ea1a93af96aa5eeb3f3f4 WatchSource:0}: Error finding container f4bb4c0951c3cfc5f8ac23648d5af237ce2ee76d4b5ea1a93af96aa5eeb3f3f4: Status 404 returned error can't find the container with id f4bb4c0951c3cfc5f8ac23648d5af237ce2ee76d4b5ea1a93af96aa5eeb3f3f4
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.485896 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111e1778-755c-459f-9e15-be059572c236" path="/var/lib/kubelet/pods/111e1778-755c-459f-9e15-be059572c236/volumes"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.486769 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59" path="/var/lib/kubelet/pods/6eb40e9a-c3d2-47ae-b59d-9d69bbd2cf59/volumes"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.487680 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e6430c-b6e4-4df2-9906-016da16b3646" path="/var/lib/kubelet/pods/a2e6430c-b6e4-4df2-9906-016da16b3646/volumes"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.870456 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.870763 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.870870 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.871506 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.871673 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676" gracePeriod=600
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.994592 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" event={"ID":"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4","Type":"ContainerStarted","Data":"d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448"}
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.994961 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" event={"ID":"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4","Type":"ContainerStarted","Data":"f4bb4c0951c3cfc5f8ac23648d5af237ce2ee76d4b5ea1a93af96aa5eeb3f3f4"}
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.995403 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.996702 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" event={"ID":"88d52b19-178a-4d00-9347-2231f39cb2a6","Type":"ContainerStarted","Data":"a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed"}
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.996726 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" event={"ID":"88d52b19-178a-4d00-9347-2231f39cb2a6","Type":"ContainerStarted","Data":"b4230b80f5a07854acc44fbfe2a40e58073a7636e90194321d9c35db579dfacc"}
Feb 19 08:04:11 crc kubenswrapper[5023]: I0219 08:04:11.997111 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:12 crc kubenswrapper[5023]: I0219 08:04:12.000954 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll"
Feb 19 08:04:12 crc kubenswrapper[5023]: I0219 08:04:12.015554 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" podStartSLOduration=3.015537544 podStartE2EDuration="3.015537544s" podCreationTimestamp="2026-02-19 08:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:04:12.013594841 +0000 UTC m=+209.670713809" watchObservedRunningTime="2026-02-19 08:04:12.015537544 +0000 UTC m=+209.672656482"
Feb 19 08:04:12 crc kubenswrapper[5023]: I0219 08:04:12.036515 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" podStartSLOduration=3.036490358 podStartE2EDuration="3.036490358s" podCreationTimestamp="2026-02-19 08:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:04:12.031028898 +0000 UTC m=+209.688147866" watchObservedRunningTime="2026-02-19 08:04:12.036490358 +0000 UTC m=+209.693609326"
Feb 19 08:04:12 crc kubenswrapper[5023]: I0219 08:04:12.073244 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"
Feb 19 08:04:12 crc kubenswrapper[5023]: I0219 08:04:12.439063 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q274g"
Feb 19 08:04:12 crc kubenswrapper[5023]: I0219 08:04:12.810205 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cntt2"
Feb 19 08:04:13 crc kubenswrapper[5023]: I0219 08:04:13.019788 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676" exitCode=0
Feb 19 08:04:13 crc kubenswrapper[5023]: I0219 08:04:13.019893 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676"}
Feb 19 08:04:13 crc kubenswrapper[5023]: I0219 08:04:13.020022 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"e6fcc29396710781a5391009cb3d9d68a134c79958bcaa1a8f708e34f123e5a1"}
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.419025 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cntt2"]
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.419411 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cntt2" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="registry-server" containerID="cri-o://dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e" gracePeriod=2
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.856002 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cntt2"
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.961297 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-catalog-content\") pod \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") "
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.961509 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-utilities\") pod \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") "
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.961534 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbzpc\" (UniqueName: \"kubernetes.io/projected/6c8fc31b-73c3-4a18-bf3a-d684464c7625-kube-api-access-lbzpc\") pod \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\" (UID: \"6c8fc31b-73c3-4a18-bf3a-d684464c7625\") "
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.962743 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-utilities" (OuterVolumeSpecName: "utilities") pod "6c8fc31b-73c3-4a18-bf3a-d684464c7625" (UID: "6c8fc31b-73c3-4a18-bf3a-d684464c7625"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:04:14 crc kubenswrapper[5023]: I0219 08:04:14.967430 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c8fc31b-73c3-4a18-bf3a-d684464c7625-kube-api-access-lbzpc" (OuterVolumeSpecName: "kube-api-access-lbzpc") pod "6c8fc31b-73c3-4a18-bf3a-d684464c7625" (UID: "6c8fc31b-73c3-4a18-bf3a-d684464c7625"). InnerVolumeSpecName "kube-api-access-lbzpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.012958 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c8fc31b-73c3-4a18-bf3a-d684464c7625" (UID: "6c8fc31b-73c3-4a18-bf3a-d684464c7625"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.041861 5023 generic.go:334] "Generic (PLEG): container finished" podID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerID="dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e" exitCode=0
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.041938 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cntt2" event={"ID":"6c8fc31b-73c3-4a18-bf3a-d684464c7625","Type":"ContainerDied","Data":"dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e"}
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.042014 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cntt2" event={"ID":"6c8fc31b-73c3-4a18-bf3a-d684464c7625","Type":"ContainerDied","Data":"11c7af989449e75ceadd542831eca611c0d2e4021d58132f60e8a0805e888842"}
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.042044 5023 scope.go:117] "RemoveContainer" containerID="dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.042581 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cntt2"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.060389 5023 scope.go:117] "RemoveContainer" containerID="bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.062891 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.062919 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbzpc\" (UniqueName: \"kubernetes.io/projected/6c8fc31b-73c3-4a18-bf3a-d684464c7625-kube-api-access-lbzpc\") on node \"crc\" DevicePath \"\""
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.062928 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8fc31b-73c3-4a18-bf3a-d684464c7625-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.081607 5023 scope.go:117] "RemoveContainer" containerID="986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.104866 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cntt2"]
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.107467 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cntt2"]
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.111835 5023 scope.go:117] "RemoveContainer" containerID="dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e"
Feb 19 08:04:15 crc kubenswrapper[5023]: E0219 08:04:15.112283 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e\": container with ID starting with dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e not found: ID does not exist" containerID="dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.112343 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e"} err="failed to get container status \"dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e\": rpc error: code = NotFound desc = could not find container \"dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e\": container with ID starting with dd35ebd764bad08dba727dac3a095ede11dd9e074872db80306aa6aec402819e not found: ID does not exist"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.112373 5023 scope.go:117] "RemoveContainer" containerID="bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890"
Feb 19 08:04:15 crc kubenswrapper[5023]: E0219 08:04:15.112788 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890\": container with ID starting with bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890 not found: ID does not exist" containerID="bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.112817 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890"} err="failed to get container status \"bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890\": rpc error: code = NotFound desc = could not find container \"bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890\": container with ID starting with bf136c32d36ba9bdd8aa4b83dd9948f8c81f3c0596a214b83e3f2039485a3890 not found: ID does not exist"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.112839 5023 scope.go:117] "RemoveContainer" containerID="986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66"
Feb 19 08:04:15 crc kubenswrapper[5023]: E0219 08:04:15.113042 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66\": container with ID starting with 986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66 not found: ID does not exist" containerID="986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.113063 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66"} err="failed to get container status \"986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66\": rpc error: code = NotFound desc = could not find container \"986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66\": container with ID starting with 986242a2ae56e3fca1b35cdb4bbbe1e31e16c90fe90803fd3dd3e04d7050ff66 not found: ID does not exist"
Feb 19 08:04:15 crc kubenswrapper[5023]: I0219 08:04:15.486768 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" path="/var/lib/kubelet/pods/6c8fc31b-73c3-4a18-bf3a-d684464c7625/volumes"
Feb 19 08:04:18 crc kubenswrapper[5023]: I0219 08:04:18.537775 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" podUID="ac2444b2-3e6c-4704-b065-abf105add63c" containerName="oauth-openshift" containerID="cri-o://4181226703db5e57deb6948468a4142403311d56e86d23ec5269b627489ed360" gracePeriod=15
Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.073437 5023 generic.go:334] "Generic (PLEG): container finished" podID="ac2444b2-3e6c-4704-b065-abf105add63c" containerID="4181226703db5e57deb6948468a4142403311d56e86d23ec5269b627489ed360" exitCode=0
Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.073606 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" event={"ID":"ac2444b2-3e6c-4704-b065-abf105add63c","Type":"ContainerDied","Data":"4181226703db5e57deb6948468a4142403311d56e86d23ec5269b627489ed360"}
Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.074017 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" event={"ID":"ac2444b2-3e6c-4704-b065-abf105add63c","Type":"ContainerDied","Data":"cf16e95222c0de8dfdfc859f44d16adf519214e5efd05429c5c08dfb96a29c3d"}
Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.074044 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf16e95222c0de8dfdfc859f44d16adf519214e5efd05429c5c08dfb96a29c3d"
Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.089001 5023 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228613 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w97wh\" (UniqueName: \"kubernetes.io/projected/ac2444b2-3e6c-4704-b065-abf105add63c-kube-api-access-w97wh\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228714 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-router-certs\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228788 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-cliconfig\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228838 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-audit-policies\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228862 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-trusted-ca-bundle\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc 
kubenswrapper[5023]: I0219 08:04:19.228893 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-ocp-branding-template\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228929 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-session\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228954 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-service-ca\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.228997 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-idp-0-file-data\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.229040 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-serving-cert\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.229063 5023 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-login\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.229090 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-provider-selection\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.229116 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-error\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.229141 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac2444b2-3e6c-4704-b065-abf105add63c-audit-dir\") pod \"ac2444b2-3e6c-4704-b065-abf105add63c\" (UID: \"ac2444b2-3e6c-4704-b065-abf105add63c\") " Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.229434 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac2444b2-3e6c-4704-b065-abf105add63c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.230929 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.230993 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.231008 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.231046 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.236272 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.236400 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac2444b2-3e6c-4704-b065-abf105add63c-kube-api-access-w97wh" (OuterVolumeSpecName: "kube-api-access-w97wh") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "kube-api-access-w97wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.236552 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.236956 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.237388 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.237675 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.238254 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.238674 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.238829 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ac2444b2-3e6c-4704-b065-abf105add63c" (UID: "ac2444b2-3e6c-4704-b065-abf105add63c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.330885 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.330937 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.330952 5023 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.330969 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.330983 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.330996 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.331010 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.331023 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.331038 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.331054 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.331068 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 
08:04:19.331084 5023 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac2444b2-3e6c-4704-b065-abf105add63c-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.331097 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w97wh\" (UniqueName: \"kubernetes.io/projected/ac2444b2-3e6c-4704-b065-abf105add63c-kube-api-access-w97wh\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:19 crc kubenswrapper[5023]: I0219 08:04:19.331109 5023 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac2444b2-3e6c-4704-b065-abf105add63c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:20 crc kubenswrapper[5023]: I0219 08:04:20.079267 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xn8fp" Feb 19 08:04:20 crc kubenswrapper[5023]: I0219 08:04:20.099983 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xn8fp"] Feb 19 08:04:20 crc kubenswrapper[5023]: I0219 08:04:20.106052 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xn8fp"] Feb 19 08:04:21 crc kubenswrapper[5023]: I0219 08:04:21.487977 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac2444b2-3e6c-4704-b065-abf105add63c" path="/var/lib/kubelet/pods/ac2444b2-3e6c-4704-b065-abf105add63c/volumes" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.642461 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6"] Feb 19 08:04:23 crc kubenswrapper[5023]: E0219 08:04:23.643035 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="extract-content" 
Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.643052 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="extract-content" Feb 19 08:04:23 crc kubenswrapper[5023]: E0219 08:04:23.643072 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2444b2-3e6c-4704-b065-abf105add63c" containerName="oauth-openshift" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.643080 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2444b2-3e6c-4704-b065-abf105add63c" containerName="oauth-openshift" Feb 19 08:04:23 crc kubenswrapper[5023]: E0219 08:04:23.643091 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="extract-utilities" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.643099 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="extract-utilities" Feb 19 08:04:23 crc kubenswrapper[5023]: E0219 08:04:23.643115 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="registry-server" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.643124 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="registry-server" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.643234 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c8fc31b-73c3-4a18-bf3a-d684464c7625" containerName="registry-server" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.643266 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac2444b2-3e6c-4704-b065-abf105add63c" containerName="oauth-openshift" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.643710 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.646457 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.649016 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.649313 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.649474 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.650126 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.650280 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.650465 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.650509 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.650670 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.650810 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 08:04:23 crc 
kubenswrapper[5023]: I0219 08:04:23.650922 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.651885 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.675028 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.677640 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.692236 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6"] Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.695069 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.794047 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.794453 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: 
\"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.794604 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.794759 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-audit-policies\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.794896 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.795013 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.795131 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.795259 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.795550 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.795686 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.795813 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/1882a732-b129-4373-9602-ef10efac8a6a-audit-dir\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.795924 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfwm\" (UniqueName: \"kubernetes.io/projected/1882a732-b129-4373-9602-ef10efac8a6a-kube-api-access-zrfwm\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.796051 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.796169 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897575 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " 
pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897676 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897717 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897764 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897810 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897856 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-audit-policies\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897898 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897930 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897955 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.897977 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: 
\"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.898010 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.898045 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.898082 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1882a732-b129-4373-9602-ef10efac8a6a-audit-dir\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.898109 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfwm\" (UniqueName: \"kubernetes.io/projected/1882a732-b129-4373-9602-ef10efac8a6a-kube-api-access-zrfwm\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.898861 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/1882a732-b129-4373-9602-ef10efac8a6a-audit-dir\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.899589 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.899881 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.900338 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-audit-policies\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.900714 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 
08:04:23.911356 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.911375 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.911500 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.911685 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.911805 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.911798 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.911842 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.914298 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1882a732-b129-4373-9602-ef10efac8a6a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 crc kubenswrapper[5023]: I0219 08:04:23.918549 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfwm\" (UniqueName: \"kubernetes.io/projected/1882a732-b129-4373-9602-ef10efac8a6a-kube-api-access-zrfwm\") pod \"oauth-openshift-6447dfb5d9-j2nd6\" (UID: \"1882a732-b129-4373-9602-ef10efac8a6a\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:23 
crc kubenswrapper[5023]: I0219 08:04:23.971747 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:24 crc kubenswrapper[5023]: I0219 08:04:24.521849 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6"] Feb 19 08:04:24 crc kubenswrapper[5023]: W0219 08:04:24.528382 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1882a732_b129_4373_9602_ef10efac8a6a.slice/crio-a0b430c6ec54bfec84d58a1932bedd47a0cacd78ef1b61eec4a580ce1ea3e6b0 WatchSource:0}: Error finding container a0b430c6ec54bfec84d58a1932bedd47a0cacd78ef1b61eec4a580ce1ea3e6b0: Status 404 returned error can't find the container with id a0b430c6ec54bfec84d58a1932bedd47a0cacd78ef1b61eec4a580ce1ea3e6b0 Feb 19 08:04:25 crc kubenswrapper[5023]: I0219 08:04:25.119035 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" event={"ID":"1882a732-b129-4373-9602-ef10efac8a6a","Type":"ContainerStarted","Data":"f0d65e5a4d1890d1f8981656099ad8b8afdd244b23484329256fbc0b72172e73"} Feb 19 08:04:25 crc kubenswrapper[5023]: I0219 08:04:25.119364 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" event={"ID":"1882a732-b129-4373-9602-ef10efac8a6a","Type":"ContainerStarted","Data":"a0b430c6ec54bfec84d58a1932bedd47a0cacd78ef1b61eec4a580ce1ea3e6b0"} Feb 19 08:04:25 crc kubenswrapper[5023]: I0219 08:04:25.119536 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:25 crc kubenswrapper[5023]: I0219 08:04:25.150109 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" 
podStartSLOduration=32.150066644 podStartE2EDuration="32.150066644s" podCreationTimestamp="2026-02-19 08:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:04:25.142065695 +0000 UTC m=+222.799184643" watchObservedRunningTime="2026-02-19 08:04:25.150066644 +0000 UTC m=+222.807185632" Feb 19 08:04:25 crc kubenswrapper[5023]: I0219 08:04:25.187125 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6447dfb5d9-j2nd6" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.060786 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85b68bb498-dt9ll"] Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.061848 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" podUID="88d52b19-178a-4d00-9347-2231f39cb2a6" containerName="controller-manager" containerID="cri-o://a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed" gracePeriod=30 Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.156960 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"] Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.157199 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" podUID="2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" containerName="route-controller-manager" containerID="cri-o://d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448" gracePeriod=30 Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.647602 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.653045 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781281 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-client-ca\") pod \"88d52b19-178a-4d00-9347-2231f39cb2a6\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781680 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-proxy-ca-bundles\") pod \"88d52b19-178a-4d00-9347-2231f39cb2a6\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781721 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d52b19-178a-4d00-9347-2231f39cb2a6-serving-cert\") pod \"88d52b19-178a-4d00-9347-2231f39cb2a6\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781748 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm9fg\" (UniqueName: \"kubernetes.io/projected/88d52b19-178a-4d00-9347-2231f39cb2a6-kube-api-access-nm9fg\") pod \"88d52b19-178a-4d00-9347-2231f39cb2a6\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781797 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-client-ca\") pod 
\"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781825 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-config\") pod \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781853 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-serving-cert\") pod \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781874 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84c54\" (UniqueName: \"kubernetes.io/projected/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-kube-api-access-84c54\") pod \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\" (UID: \"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.781897 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-config\") pod \"88d52b19-178a-4d00-9347-2231f39cb2a6\" (UID: \"88d52b19-178a-4d00-9347-2231f39cb2a6\") " Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.783150 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-config" (OuterVolumeSpecName: "config") pod "88d52b19-178a-4d00-9347-2231f39cb2a6" (UID: "88d52b19-178a-4d00-9347-2231f39cb2a6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.784028 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-client-ca" (OuterVolumeSpecName: "client-ca") pod "88d52b19-178a-4d00-9347-2231f39cb2a6" (UID: "88d52b19-178a-4d00-9347-2231f39cb2a6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.784780 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88d52b19-178a-4d00-9347-2231f39cb2a6" (UID: "88d52b19-178a-4d00-9347-2231f39cb2a6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.784996 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" (UID: "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.785301 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-config" (OuterVolumeSpecName: "config") pod "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" (UID: "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.794955 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" (UID: "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.795025 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d52b19-178a-4d00-9347-2231f39cb2a6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88d52b19-178a-4d00-9347-2231f39cb2a6" (UID: "88d52b19-178a-4d00-9347-2231f39cb2a6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.795032 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d52b19-178a-4d00-9347-2231f39cb2a6-kube-api-access-nm9fg" (OuterVolumeSpecName: "kube-api-access-nm9fg") pod "88d52b19-178a-4d00-9347-2231f39cb2a6" (UID: "88d52b19-178a-4d00-9347-2231f39cb2a6"). InnerVolumeSpecName "kube-api-access-nm9fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.808099 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-kube-api-access-84c54" (OuterVolumeSpecName: "kube-api-access-84c54") pod "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" (UID: "2e3e4553-a36b-4cb3-9e65-ac738dd29bc4"). InnerVolumeSpecName "kube-api-access-84c54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883478 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883529 5023 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883542 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d52b19-178a-4d00-9347-2231f39cb2a6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883553 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm9fg\" (UniqueName: \"kubernetes.io/projected/88d52b19-178a-4d00-9347-2231f39cb2a6-kube-api-access-nm9fg\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883565 5023 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883574 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883583 5023 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883592 5023 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-84c54\" (UniqueName: \"kubernetes.io/projected/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4-kube-api-access-84c54\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:29 crc kubenswrapper[5023]: I0219 08:04:29.883602 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d52b19-178a-4d00-9347-2231f39cb2a6-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.155359 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" event={"ID":"88d52b19-178a-4d00-9347-2231f39cb2a6","Type":"ContainerDied","Data":"a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed"} Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.155396 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.156521 5023 scope.go:117] "RemoveContainer" containerID="a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.155283 5023 generic.go:334] "Generic (PLEG): container finished" podID="88d52b19-178a-4d00-9347-2231f39cb2a6" containerID="a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed" exitCode=0 Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.157152 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85b68bb498-dt9ll" event={"ID":"88d52b19-178a-4d00-9347-2231f39cb2a6","Type":"ContainerDied","Data":"b4230b80f5a07854acc44fbfe2a40e58073a7636e90194321d9c35db579dfacc"} Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.161021 5023 generic.go:334] "Generic (PLEG): container finished" podID="2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" 
containerID="d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448" exitCode=0 Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.161283 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" event={"ID":"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4","Type":"ContainerDied","Data":"d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448"} Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.161358 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" event={"ID":"2e3e4553-a36b-4cb3-9e65-ac738dd29bc4","Type":"ContainerDied","Data":"f4bb4c0951c3cfc5f8ac23648d5af237ce2ee76d4b5ea1a93af96aa5eeb3f3f4"} Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.161496 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.175278 5023 scope.go:117] "RemoveContainer" containerID="a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed" Feb 19 08:04:30 crc kubenswrapper[5023]: E0219 08:04:30.176036 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed\": container with ID starting with a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed not found: ID does not exist" containerID="a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.176250 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed"} err="failed to get container status \"a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed\": 
rpc error: code = NotFound desc = could not find container \"a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed\": container with ID starting with a17f05de6d764c85153b54b51384e4d1ab3c05c8da1d7c6ee7c6a51fb7031eed not found: ID does not exist" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.176398 5023 scope.go:117] "RemoveContainer" containerID="d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.200742 5023 scope.go:117] "RemoveContainer" containerID="d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448" Feb 19 08:04:30 crc kubenswrapper[5023]: E0219 08:04:30.201435 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448\": container with ID starting with d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448 not found: ID does not exist" containerID="d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.201468 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448"} err="failed to get container status \"d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448\": rpc error: code = NotFound desc = could not find container \"d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448\": container with ID starting with d7d5d47e8eba947ce6796d7ec5388b8f5731c81056b0f9eb1479c07dfd80c448 not found: ID does not exist" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.209495 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"] Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.216342 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-676d6c485d-g7r4q"] Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.229240 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85b68bb498-dt9ll"] Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.231812 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85b68bb498-dt9ll"] Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.650825 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5"] Feb 19 08:04:30 crc kubenswrapper[5023]: E0219 08:04:30.651196 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" containerName="route-controller-manager" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.651220 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" containerName="route-controller-manager" Feb 19 08:04:30 crc kubenswrapper[5023]: E0219 08:04:30.651256 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d52b19-178a-4d00-9347-2231f39cb2a6" containerName="controller-manager" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.651266 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d52b19-178a-4d00-9347-2231f39cb2a6" containerName="controller-manager" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.651418 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" containerName="route-controller-manager" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.651443 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d52b19-178a-4d00-9347-2231f39cb2a6" containerName="controller-manager" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.652181 5023 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.654150 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns"] Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.655200 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.656261 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.656406 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.656523 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.656674 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.657187 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.657595 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.657927 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.658532 5023 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.659376 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.659413 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.661174 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.661247 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.672712 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.677100 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5"] Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.680307 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns"] Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799310 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46744a76-5ba5-4173-ac29-5dc1f2b65954-serving-cert\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799368 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/abf82987-f865-4058-8097-db10c1aa2241-serving-cert\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799399 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ng7m\" (UniqueName: \"kubernetes.io/projected/46744a76-5ba5-4173-ac29-5dc1f2b65954-kube-api-access-8ng7m\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799458 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-client-ca\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799506 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abf82987-f865-4058-8097-db10c1aa2241-client-ca\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799533 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfwz\" (UniqueName: \"kubernetes.io/projected/abf82987-f865-4058-8097-db10c1aa2241-kube-api-access-8tfwz\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: 
\"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799553 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf82987-f865-4058-8097-db10c1aa2241-config\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799572 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-config\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.799587 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-proxy-ca-bundles\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.902762 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-client-ca\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.901536 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-client-ca\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.902907 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abf82987-f865-4058-8097-db10c1aa2241-client-ca\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.903745 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abf82987-f865-4058-8097-db10c1aa2241-client-ca\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.903799 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfwz\" (UniqueName: \"kubernetes.io/projected/abf82987-f865-4058-8097-db10c1aa2241-kube-api-access-8tfwz\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.903891 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf82987-f865-4058-8097-db10c1aa2241-config\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 
crc kubenswrapper[5023]: I0219 08:04:30.903938 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-config\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.903962 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-proxy-ca-bundles\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.904037 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46744a76-5ba5-4173-ac29-5dc1f2b65954-serving-cert\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.904089 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abf82987-f865-4058-8097-db10c1aa2241-serving-cert\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.904162 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ng7m\" (UniqueName: \"kubernetes.io/projected/46744a76-5ba5-4173-ac29-5dc1f2b65954-kube-api-access-8ng7m\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: 
\"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.905308 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-proxy-ca-bundles\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.906098 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf82987-f865-4058-8097-db10c1aa2241-config\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.906324 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46744a76-5ba5-4173-ac29-5dc1f2b65954-config\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.911785 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abf82987-f865-4058-8097-db10c1aa2241-serving-cert\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.916736 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/46744a76-5ba5-4173-ac29-5dc1f2b65954-serving-cert\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.926697 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfwz\" (UniqueName: \"kubernetes.io/projected/abf82987-f865-4058-8097-db10c1aa2241-kube-api-access-8tfwz\") pod \"route-controller-manager-577cbfcd68-n4zns\" (UID: \"abf82987-f865-4058-8097-db10c1aa2241\") " pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.927560 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ng7m\" (UniqueName: \"kubernetes.io/projected/46744a76-5ba5-4173-ac29-5dc1f2b65954-kube-api-access-8ng7m\") pod \"controller-manager-bf8b6c6fc-8jpg5\" (UID: \"46744a76-5ba5-4173-ac29-5dc1f2b65954\") " pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.970957 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:30 crc kubenswrapper[5023]: I0219 08:04:30.982827 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:31 crc kubenswrapper[5023]: I0219 08:04:31.397593 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5"] Feb 19 08:04:31 crc kubenswrapper[5023]: W0219 08:04:31.406468 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46744a76_5ba5_4173_ac29_5dc1f2b65954.slice/crio-95d7373d2ecd548782b5f2606baaa5d2a53621c3f82aa1fd1f195adbd307dddc WatchSource:0}: Error finding container 95d7373d2ecd548782b5f2606baaa5d2a53621c3f82aa1fd1f195adbd307dddc: Status 404 returned error can't find the container with id 95d7373d2ecd548782b5f2606baaa5d2a53621c3f82aa1fd1f195adbd307dddc Feb 19 08:04:31 crc kubenswrapper[5023]: I0219 08:04:31.484112 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3e4553-a36b-4cb3-9e65-ac738dd29bc4" path="/var/lib/kubelet/pods/2e3e4553-a36b-4cb3-9e65-ac738dd29bc4/volumes" Feb 19 08:04:31 crc kubenswrapper[5023]: I0219 08:04:31.485384 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d52b19-178a-4d00-9347-2231f39cb2a6" path="/var/lib/kubelet/pods/88d52b19-178a-4d00-9347-2231f39cb2a6/volumes" Feb 19 08:04:31 crc kubenswrapper[5023]: I0219 08:04:31.489497 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns"] Feb 19 08:04:31 crc kubenswrapper[5023]: W0219 08:04:31.500007 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabf82987_f865_4058_8097_db10c1aa2241.slice/crio-0098cf332e29102765007ea1e65bfbbf402e379f3422e07f406beb22172e6072 WatchSource:0}: Error finding container 0098cf332e29102765007ea1e65bfbbf402e379f3422e07f406beb22172e6072: Status 404 returned error can't find the 
container with id 0098cf332e29102765007ea1e65bfbbf402e379f3422e07f406beb22172e6072 Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.186350 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" event={"ID":"abf82987-f865-4058-8097-db10c1aa2241","Type":"ContainerStarted","Data":"d07e1a35206f4e4309c64cdf236a872ed8095da556dad179438e69e4d0975914"} Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.186775 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" event={"ID":"abf82987-f865-4058-8097-db10c1aa2241","Type":"ContainerStarted","Data":"0098cf332e29102765007ea1e65bfbbf402e379f3422e07f406beb22172e6072"} Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.186796 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.187548 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" event={"ID":"46744a76-5ba5-4173-ac29-5dc1f2b65954","Type":"ContainerStarted","Data":"71b5194edabca91548be698bc49512a297a2e96ea66057ae68989cec8e2e3563"} Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.187598 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" event={"ID":"46744a76-5ba5-4173-ac29-5dc1f2b65954","Type":"ContainerStarted","Data":"95d7373d2ecd548782b5f2606baaa5d2a53621c3f82aa1fd1f195adbd307dddc"} Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.191673 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.204838 5023 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-577cbfcd68-n4zns" podStartSLOduration=3.204817712 podStartE2EDuration="3.204817712s" podCreationTimestamp="2026-02-19 08:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:04:32.201962193 +0000 UTC m=+229.859081141" watchObservedRunningTime="2026-02-19 08:04:32.204817712 +0000 UTC m=+229.861936660" Feb 19 08:04:32 crc kubenswrapper[5023]: I0219 08:04:32.234417 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" podStartSLOduration=3.234396042 podStartE2EDuration="3.234396042s" podCreationTimestamp="2026-02-19 08:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:04:32.233333993 +0000 UTC m=+229.890452941" watchObservedRunningTime="2026-02-19 08:04:32.234396042 +0000 UTC m=+229.891514990" Feb 19 08:04:33 crc kubenswrapper[5023]: I0219 08:04:33.194428 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:33 crc kubenswrapper[5023]: I0219 08:04:33.203141 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bf8b6c6fc-8jpg5" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.227285 5023 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.229400 5023 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.229464 5023 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.229637 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.229971 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4" gracePeriod=15 Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230006 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c" gracePeriod=15 Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230088 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b" gracePeriod=15 Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230106 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d" gracePeriod=15 Feb 19 08:04:41 crc kubenswrapper[5023]: E0219 08:04:41.230313 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 08:04:41 crc 
kubenswrapper[5023]: I0219 08:04:41.230351 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 08:04:41 crc kubenswrapper[5023]: E0219 08:04:41.230370 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230380 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 08:04:41 crc kubenswrapper[5023]: E0219 08:04:41.230389 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230400 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:04:41 crc kubenswrapper[5023]: E0219 08:04:41.230415 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230425 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 08:04:41 crc kubenswrapper[5023]: E0219 08:04:41.230438 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230446 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 08:04:41 crc kubenswrapper[5023]: E0219 08:04:41.230463 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230474 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 08:04:41 crc kubenswrapper[5023]: E0219 08:04:41.230486 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230494 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230659 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230675 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230693 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230704 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230713 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.230726 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 08:04:41 crc 
kubenswrapper[5023]: I0219 08:04:41.230721 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc" gracePeriod=15 Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.236365 5023 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349652 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349706 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349737 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349772 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349797 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349817 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349841 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.349861 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 
08:04:41.451838 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.451957 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.451996 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452047 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452091 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452124 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452152 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452189 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452283 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452336 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452369 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452397 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452428 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452456 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452485 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:41 crc kubenswrapper[5023]: I0219 08:04:41.452519 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.246433 5023 generic.go:334] "Generic (PLEG): container finished" podID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" containerID="689b97f553249a04f7e249032806de06b78f9443ce903c2b2773a52e1855d56a" exitCode=0 Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.246503 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"00505d77-b5f5-492f-8c8f-33817b2b0b8c","Type":"ContainerDied","Data":"689b97f553249a04f7e249032806de06b78f9443ce903c2b2773a52e1855d56a"} Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.247299 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.248603 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.249927 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.250516 5023 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c" exitCode=0 Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.250538 5023 generic.go:334] "Generic (PLEG): container 
finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b" exitCode=0 Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.250547 5023 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc" exitCode=0 Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.250555 5023 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d" exitCode=2 Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.250585 5023 scope.go:117] "RemoveContainer" containerID="a4f810f3282997c5d0622d61beb8878c0092ccdd4abb3500cee3fffcd9aaa2cf" Feb 19 08:04:42 crc kubenswrapper[5023]: E0219 08:04:42.926064 5023 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:42 crc kubenswrapper[5023]: E0219 08:04:42.926932 5023 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:42 crc kubenswrapper[5023]: E0219 08:04:42.927636 5023 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:42 crc kubenswrapper[5023]: E0219 08:04:42.927942 5023 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection 
refused" Feb 19 08:04:42 crc kubenswrapper[5023]: E0219 08:04:42.928184 5023 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:42 crc kubenswrapper[5023]: I0219 08:04:42.928217 5023 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 08:04:42 crc kubenswrapper[5023]: E0219 08:04:42.928400 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="200ms" Feb 19 08:04:43 crc kubenswrapper[5023]: E0219 08:04:43.129911 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="400ms" Feb 19 08:04:43 crc kubenswrapper[5023]: I0219 08:04:43.257988 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:04:43 crc kubenswrapper[5023]: I0219 08:04:43.479858 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:43 crc kubenswrapper[5023]: E0219 08:04:43.530789 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="800ms" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.143735 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.144722 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.145479 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.146239 5023 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.265719 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"00505d77-b5f5-492f-8c8f-33817b2b0b8c","Type":"ContainerDied","Data":"009b566da6f226d938fa987ce0d839dee7273118257b9d90bcf20fdb3890de23"} Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.266196 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="009b566da6f226d938fa987ce0d839dee7273118257b9d90bcf20fdb3890de23" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.268641 5023 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.269364 5023 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4" exitCode=0 Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.269441 5023 scope.go:117] "RemoveContainer" containerID="fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.269457 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.276469 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.277042 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.277554 5023 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.290834 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.290893 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.290927 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.290966 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.291017 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.291091 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.291402 5023 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.291428 5023 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.291440 5023 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.303851 5023 scope.go:117] "RemoveContainer" containerID="dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.319007 5023 scope.go:117] "RemoveContainer" containerID="675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.331285 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="1.6s" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.332892 5023 scope.go:117] "RemoveContainer" containerID="f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.345883 5023 scope.go:117] "RemoveContainer" containerID="7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.361760 5023 scope.go:117] 
"RemoveContainer" containerID="31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.392919 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kube-api-access\") pod \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.393057 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kubelet-dir\") pod \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.393087 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-var-lock\") pod \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\" (UID: \"00505d77-b5f5-492f-8c8f-33817b2b0b8c\") " Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.393142 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "00505d77-b5f5-492f-8c8f-33817b2b0b8c" (UID: "00505d77-b5f5-492f-8c8f-33817b2b0b8c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.393219 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-var-lock" (OuterVolumeSpecName: "var-lock") pod "00505d77-b5f5-492f-8c8f-33817b2b0b8c" (UID: "00505d77-b5f5-492f-8c8f-33817b2b0b8c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.393740 5023 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.393764 5023 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00505d77-b5f5-492f-8c8f-33817b2b0b8c-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.395513 5023 scope.go:117] "RemoveContainer" containerID="fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.397038 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\": container with ID starting with fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c not found: ID does not exist" containerID="fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.397089 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c"} err="failed to get container status \"fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\": rpc error: code = NotFound desc = could not find container \"fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c\": container with ID starting with fa1677a8a85133b17199796d67394468013dd67684684fa4be4a8c7cc2f8182c not found: ID does not exist" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.397128 5023 scope.go:117] "RemoveContainer" containerID="dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b" Feb 19 
08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.397525 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\": container with ID starting with dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b not found: ID does not exist" containerID="dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.397584 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b"} err="failed to get container status \"dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\": rpc error: code = NotFound desc = could not find container \"dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b\": container with ID starting with dbf13f744d72548deac440c4fe3cd7154ac226a9ce86908ca21540d68ab3114b not found: ID does not exist" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.398040 5023 scope.go:117] "RemoveContainer" containerID="675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.398456 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\": container with ID starting with 675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc not found: ID does not exist" containerID="675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.398499 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc"} err="failed to get container status 
\"675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\": rpc error: code = NotFound desc = could not find container \"675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc\": container with ID starting with 675b6ab3eebd77c2d4dab5ddfd4f29f0b6b7163c55c6db69e7b1b87d9cbe15fc not found: ID does not exist" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.398532 5023 scope.go:117] "RemoveContainer" containerID="f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.398868 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\": container with ID starting with f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d not found: ID does not exist" containerID="f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.398922 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d"} err="failed to get container status \"f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\": rpc error: code = NotFound desc = could not find container \"f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d\": container with ID starting with f81b82150aec2fc015d133c4420738668bcaff08c2afe407ce7a26d5d7ff130d not found: ID does not exist" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.398938 5023 scope.go:117] "RemoveContainer" containerID="7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.399286 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\": container with ID starting with 7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4 not found: ID does not exist" containerID="7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.399324 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4"} err="failed to get container status \"7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\": rpc error: code = NotFound desc = could not find container \"7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4\": container with ID starting with 7d245732531a7448d6b267f53e4f104f23b82e002591e2d5e0e6c09277bbd1d4 not found: ID does not exist" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.399347 5023 scope.go:117] "RemoveContainer" containerID="31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.399569 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "00505d77-b5f5-492f-8c8f-33817b2b0b8c" (UID: "00505d77-b5f5-492f-8c8f-33817b2b0b8c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.399667 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\": container with ID starting with 31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f not found: ID does not exist" containerID="31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.399702 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f"} err="failed to get container status \"31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\": rpc error: code = NotFound desc = could not find container \"31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f\": container with ID starting with 31bea0f58a0ce7fe6411b8e21d0b884236f98f1a73693d361176f87dd7af546f not found: ID does not exist" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.495770 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00505d77-b5f5-492f-8c8f-33817b2b0b8c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.585289 5023 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: I0219 08:04:44.585641 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.681263 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:04:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:04:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:04:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T08:04:44Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.686239 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.687008 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: 
connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.687250 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.687430 5023 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:44 crc kubenswrapper[5023]: E0219 08:04:44.687445 5023 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 08:04:45 crc kubenswrapper[5023]: I0219 08:04:45.275318 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 08:04:45 crc kubenswrapper[5023]: I0219 08:04:45.287202 5023 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:45 crc kubenswrapper[5023]: I0219 08:04:45.287759 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:45 crc kubenswrapper[5023]: I0219 08:04:45.488515 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 
19 08:04:45 crc kubenswrapper[5023]: E0219 08:04:45.932112 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="3.2s" Feb 19 08:04:46 crc kubenswrapper[5023]: E0219 08:04:46.257530 5023 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:46 crc kubenswrapper[5023]: I0219 08:04:46.258086 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:46 crc kubenswrapper[5023]: W0219 08:04:46.279186 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b51b7d899c537867907eb26213d28ef0d499a8f1bc48e66d6ddd2df83fed9dfc WatchSource:0}: Error finding container b51b7d899c537867907eb26213d28ef0d499a8f1bc48e66d6ddd2df83fed9dfc: Status 404 returned error can't find the container with id b51b7d899c537867907eb26213d28ef0d499a8f1bc48e66d6ddd2df83fed9dfc Feb 19 08:04:46 crc kubenswrapper[5023]: E0219 08:04:46.283200 5023 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18959735eba475ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:04:46.282642924 +0000 UTC m=+243.939761872,LastTimestamp:2026-02-19 08:04:46.282642924 +0000 UTC m=+243.939761872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:04:47 crc kubenswrapper[5023]: I0219 08:04:47.295520 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca"} Feb 19 08:04:47 crc kubenswrapper[5023]: I0219 08:04:47.295851 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b51b7d899c537867907eb26213d28ef0d499a8f1bc48e66d6ddd2df83fed9dfc"} Feb 19 08:04:47 crc kubenswrapper[5023]: E0219 08:04:47.296455 5023 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 08:04:47 crc kubenswrapper[5023]: I0219 08:04:47.296511 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:49 crc kubenswrapper[5023]: E0219 08:04:49.134458 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="6.4s" Feb 19 08:04:53 crc kubenswrapper[5023]: I0219 08:04:53.480550 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:54 crc kubenswrapper[5023]: I0219 08:04:54.341164 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 08:04:54 crc kubenswrapper[5023]: I0219 08:04:54.341259 5023 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129" exitCode=1 Feb 19 08:04:54 crc kubenswrapper[5023]: I0219 08:04:54.341316 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129"} Feb 19 08:04:54 crc kubenswrapper[5023]: I0219 08:04:54.342561 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:54 crc kubenswrapper[5023]: I0219 08:04:54.342993 5023 scope.go:117] "RemoveContainer" containerID="a9b09c9b55438b956623ef074b758098e29ee841ededd92b8c93e2744b74c129" Feb 19 08:04:54 crc kubenswrapper[5023]: I0219 08:04:54.343296 5023 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:55 crc kubenswrapper[5023]: I0219 08:04:55.351325 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 08:04:55 crc kubenswrapper[5023]: I0219 08:04:55.351663 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ceabe00727cad7f292567eb5c6adaf1b3197f92aa69347dddeed3a6124d9054c"} Feb 19 08:04:55 crc kubenswrapper[5023]: I0219 08:04:55.352656 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:55 crc kubenswrapper[5023]: I0219 08:04:55.353319 5023 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:55 crc kubenswrapper[5023]: E0219 08:04:55.536109 5023 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="7s" Feb 19 08:04:55 crc kubenswrapper[5023]: E0219 08:04:55.865677 5023 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18959735eba475ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 08:04:46.282642924 +0000 UTC m=+243.939761872,LastTimestamp:2026-02-19 08:04:46.282642924 +0000 UTC m=+243.939761872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 08:04:56 crc kubenswrapper[5023]: I0219 08:04:56.475785 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:56 crc kubenswrapper[5023]: I0219 08:04:56.477103 5023 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:56 crc kubenswrapper[5023]: I0219 08:04:56.477737 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:56 crc kubenswrapper[5023]: I0219 08:04:56.496072 5023 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:04:56 crc kubenswrapper[5023]: I0219 08:04:56.496109 5023 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:04:56 crc kubenswrapper[5023]: E0219 08:04:56.501261 5023 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:56 crc kubenswrapper[5023]: I0219 08:04:56.504021 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:56 crc kubenswrapper[5023]: W0219 08:04:56.541604 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6b56cc1362acb5eb6b843618462a7058950fb639b7be0187e90ff0536ec2864c WatchSource:0}: Error finding container 6b56cc1362acb5eb6b843618462a7058950fb639b7be0187e90ff0536ec2864c: Status 404 returned error can't find the container with id 6b56cc1362acb5eb6b843618462a7058950fb639b7be0187e90ff0536ec2864c Feb 19 08:04:56 crc kubenswrapper[5023]: E0219 08:04:56.859096 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-conmon-2b6558bab88dd1182c9768e6bdf62d42faf0319100f0f39abdeb6ff5720d0e90.scope\": RecentStats: unable to find data in memory cache]" Feb 19 08:04:57 crc kubenswrapper[5023]: I0219 08:04:57.365293 5023 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2b6558bab88dd1182c9768e6bdf62d42faf0319100f0f39abdeb6ff5720d0e90" exitCode=0 Feb 19 08:04:57 crc kubenswrapper[5023]: I0219 08:04:57.365424 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2b6558bab88dd1182c9768e6bdf62d42faf0319100f0f39abdeb6ff5720d0e90"} Feb 19 08:04:57 crc kubenswrapper[5023]: I0219 08:04:57.365577 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6b56cc1362acb5eb6b843618462a7058950fb639b7be0187e90ff0536ec2864c"} Feb 19 08:04:57 crc kubenswrapper[5023]: I0219 08:04:57.365861 5023 kubelet.go:1909] "Trying 
to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:04:57 crc kubenswrapper[5023]: I0219 08:04:57.365874 5023 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:04:57 crc kubenswrapper[5023]: I0219 08:04:57.366435 5023 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:57 crc kubenswrapper[5023]: E0219 08:04:57.366449 5023 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:04:57 crc kubenswrapper[5023]: I0219 08:04:57.366841 5023 status_manager.go:851] "Failed to get status for pod" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Feb 19 08:04:58 crc kubenswrapper[5023]: I0219 08:04:58.373048 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dce326b5883309cb3c61fd19d68000940fca74ecd6216c601569d7270861e002"} Feb 19 08:04:58 crc kubenswrapper[5023]: I0219 08:04:58.374671 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"88852489abc486102d7a1655d00561e919e464d46c988944a6541d92e6888f6d"} Feb 19 08:04:58 crc kubenswrapper[5023]: I0219 08:04:58.374789 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ec19e7f3569323167de4ad33d9b8879aee77a30a8e240bf66bcf9e8fd4572328"} Feb 19 08:04:59 crc kubenswrapper[5023]: I0219 08:04:59.383447 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad495dff2b99cb7ec7a12e92f815606b64381a29eb7d6f0b5729243ae299ba02"} Feb 19 08:04:59 crc kubenswrapper[5023]: I0219 08:04:59.384214 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58d469f0b44a6645d3c6c4e79db373e710ad076cc6ce9d58a9b240e45b889095"} Feb 19 08:04:59 crc kubenswrapper[5023]: I0219 08:04:59.384162 5023 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:04:59 crc kubenswrapper[5023]: I0219 08:04:59.384348 5023 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:04:59 crc kubenswrapper[5023]: I0219 08:04:59.384561 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:05:00 crc kubenswrapper[5023]: I0219 08:05:00.941494 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:05:00 crc kubenswrapper[5023]: I0219 08:05:00.946306 5023 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:05:01 crc kubenswrapper[5023]: I0219 08:05:01.394388 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:05:01 crc kubenswrapper[5023]: I0219 08:05:01.504766 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:05:01 crc kubenswrapper[5023]: I0219 08:05:01.504836 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:05:01 crc kubenswrapper[5023]: I0219 08:05:01.511841 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:05:04 crc kubenswrapper[5023]: I0219 08:05:04.394447 5023 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:05:04 crc kubenswrapper[5023]: I0219 08:05:04.442981 5023 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="79d456ce-1b43-45eb-a439-eb4bbad1342f" Feb 19 08:05:05 crc kubenswrapper[5023]: I0219 08:05:05.427614 5023 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:05:05 crc kubenswrapper[5023]: I0219 08:05:05.427685 5023 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:05:05 crc kubenswrapper[5023]: I0219 08:05:05.431867 5023 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="79d456ce-1b43-45eb-a439-eb4bbad1342f" Feb 19 08:05:05 crc kubenswrapper[5023]: I0219 08:05:05.432452 5023 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://ec19e7f3569323167de4ad33d9b8879aee77a30a8e240bf66bcf9e8fd4572328" Feb 19 08:05:05 crc kubenswrapper[5023]: I0219 08:05:05.432488 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:05:06 crc kubenswrapper[5023]: I0219 08:05:06.433028 5023 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:05:06 crc kubenswrapper[5023]: I0219 08:05:06.433065 5023 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ddb71723-0da9-449c-9fbd-8acfc7e7da29" Feb 19 08:05:06 crc kubenswrapper[5023]: I0219 08:05:06.436074 5023 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="79d456ce-1b43-45eb-a439-eb4bbad1342f" Feb 19 08:05:12 crc kubenswrapper[5023]: I0219 08:05:12.455469 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.118809 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.170316 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.219092 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" 
Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.219572 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.323524 5023 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.399913 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.815342 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 08:05:14 crc kubenswrapper[5023]: I0219 08:05:14.960719 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 08:05:15 crc kubenswrapper[5023]: I0219 08:05:15.056060 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 08:05:15 crc kubenswrapper[5023]: I0219 08:05:15.108747 5023 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 08:05:15 crc kubenswrapper[5023]: I0219 08:05:15.199739 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 08:05:15 crc kubenswrapper[5023]: I0219 08:05:15.292499 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 08:05:15 crc kubenswrapper[5023]: I0219 08:05:15.379476 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 08:05:15 crc kubenswrapper[5023]: I0219 08:05:15.853164 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 08:05:15 crc 
kubenswrapper[5023]: I0219 08:05:15.910492 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.159824 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.182721 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.357404 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.377502 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.495084 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.578613 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.710063 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.752426 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.814033 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.817345 5023 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.888688 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.927820 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 08:05:16 crc kubenswrapper[5023]: I0219 08:05:16.996126 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.105729 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.124932 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.203221 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.211347 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.218647 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.350792 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.551856 5023 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.558127 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.616373 5023 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.640518 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 08:05:17 crc kubenswrapper[5023]: I0219 08:05:17.828284 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.039852 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.110089 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.115511 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.152354 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.154706 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.171442 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.224801 5023 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.316397 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.344298 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.414866 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.445419 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.478356 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.492426 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.542609 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.609223 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.622497 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.757228 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.759539 5023 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 08:05:18 crc kubenswrapper[5023]: I0219 08:05:18.872570 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.080075 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.102829 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.123027 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.231725 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.319279 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.337668 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.358478 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.363863 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.419425 5023 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.448568 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.641027 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.685026 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.729482 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.735646 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.760715 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.772781 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.778114 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.803301 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.824658 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 08:05:19 crc kubenswrapper[5023]: I0219 08:05:19.901349 5023 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.012997 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.029328 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.048135 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.153947 5023 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.159595 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.165597 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.256213 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.262837 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.274155 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.290086 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 
08:05:20.290287 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.439553 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.485324 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.581459 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.605286 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.663225 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.671730 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.732993 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.767226 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.801287 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.804060 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 08:05:20 crc 
kubenswrapper[5023]: I0219 08:05:20.937821 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 08:05:20 crc kubenswrapper[5023]: I0219 08:05:20.939897 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.022565 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.045412 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.077788 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.096157 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.223856 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.320297 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.401474 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.463173 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.473078 5023 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.626868 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.676960 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.720536 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.832777 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.946531 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 08:05:21 crc kubenswrapper[5023]: I0219 08:05:21.952805 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.017854 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.027173 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.120929 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.129728 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.160103 
5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.234786 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.342035 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.362852 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.579337 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.785977 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.803903 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.827084 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.952027 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 08:05:22 crc kubenswrapper[5023]: I0219 08:05:22.985360 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.055213 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 08:05:23 crc 
kubenswrapper[5023]: I0219 08:05:23.222906 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.431837 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.544290 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.562154 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.583362 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.606581 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.632996 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.723203 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.761273 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.784295 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.810972 5023 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.822013 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.848558 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.942450 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 08:05:23 crc kubenswrapper[5023]: I0219 08:05:23.994253 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.023925 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.045834 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.045914 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.112778 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.136590 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.180310 5023 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.227358 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.329751 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.347282 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.421896 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.422810 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.446452 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.490783 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.495919 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.513796 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.640083 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 08:05:24 
crc kubenswrapper[5023]: I0219 08:05:24.692697 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.694991 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.708967 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.789488 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.795847 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.840872 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.843577 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.846712 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.881131 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 08:05:24 crc kubenswrapper[5023]: I0219 08:05:24.911965 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.039563 5023 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.077749 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.160822 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.202063 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.270795 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.274434 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.292442 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.307650 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.392270 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.472716 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.488324 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 
08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.504245 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.511582 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.712263 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.724708 5023 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.729080 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.729135 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.734993 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.791840 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.791816895 podStartE2EDuration="21.791816895s" podCreationTimestamp="2026-02-19 08:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:05:25.753682191 +0000 UTC m=+283.410801159" watchObservedRunningTime="2026-02-19 08:05:25.791816895 +0000 UTC m=+283.448935843" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.793483 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 08:05:25 
crc kubenswrapper[5023]: I0219 08:05:25.854185 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.889865 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 08:05:25 crc kubenswrapper[5023]: I0219 08:05:25.935388 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.004711 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.020097 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.075279 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.139776 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.159186 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.336068 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.438084 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.453368 5023 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.458444 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.537955 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.633418 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.646023 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.680411 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.725415 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.852057 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.872902 5023 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.873151 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca" gracePeriod=5 Feb 19 08:05:26 crc kubenswrapper[5023]: I0219 08:05:26.909168 
5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.015084 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.031176 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.208863 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.293014 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.311672 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.323907 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.378060 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.401449 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.458070 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.542979 5023 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.759399 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.779687 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.877736 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.921068 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 08:05:27 crc kubenswrapper[5023]: I0219 08:05:27.945195 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 08:05:28 crc kubenswrapper[5023]: I0219 08:05:28.025705 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 08:05:28 crc kubenswrapper[5023]: I0219 08:05:28.034597 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 08:05:28 crc kubenswrapper[5023]: I0219 08:05:28.156234 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 08:05:28 crc kubenswrapper[5023]: I0219 08:05:28.216770 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 08:05:28 crc kubenswrapper[5023]: I0219 08:05:28.346792 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 08:05:28 crc kubenswrapper[5023]: 
I0219 08:05:28.514429 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 19 08:05:28 crc kubenswrapper[5023]: I0219 08:05:28.551234 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.043181 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.089687 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.098779 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.365740 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.612161 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.644054 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.677437 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.680356 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.891830 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 19 08:05:29 crc kubenswrapper[5023]: I0219 08:05:29.958549 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 19 08:05:30 crc kubenswrapper[5023]: I0219 08:05:30.099598 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 08:05:30 crc kubenswrapper[5023]: I0219 08:05:30.284661 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 19 08:05:30 crc kubenswrapper[5023]: I0219 08:05:30.408295 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.459656 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.460408 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.468217 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q274g"]
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.469035 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q274g" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="registry-server" containerID="cri-o://9d6b3faf4981e1cfdda319f7342d56e55fdcf2e7259cc5c8a36a365ac608f65b" gracePeriod=30
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.483087 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hmqg6"]
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.483651 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hmqg6" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="registry-server" containerID="cri-o://90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" gracePeriod=30
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.489668 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xxg6k"]
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.489869 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerName="marketplace-operator" containerID="cri-o://897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688" gracePeriod=30
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.495957 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcd4q"]
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.496210 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mcd4q" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="registry-server" containerID="cri-o://8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316" gracePeriod=30
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.513547 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cdmv"]
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.513832 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2cdmv" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="registry-server" containerID="cri-o://c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345" gracePeriod=30
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.522205 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zqn9"]
Feb 19 08:05:32 crc kubenswrapper[5023]: E0219 08:05:32.522568 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.522584 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 08:05:32 crc kubenswrapper[5023]: E0219 08:05:32.522595 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" containerName="installer"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.522602 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" containerName="installer"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.522743 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="00505d77-b5f5-492f-8c8f-33817b2b0b8c" containerName="installer"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.522753 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.523128 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.533193 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.533236 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.533252 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.533413 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6708d9d6-f225-4977-9446-8c2374e80e18-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.533482 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzrdr\" (UniqueName: \"kubernetes.io/projected/6708d9d6-f225-4977-9446-8c2374e80e18-kube-api-access-wzrdr\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.533516 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6708d9d6-f225-4977-9446-8c2374e80e18-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.534072 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zqn9"]
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.534773 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.534776 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.548898 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 08:05:32 crc kubenswrapper[5023]: E0219 08:05:32.595938 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87 is running failed: container process not found" containerID="90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 08:05:32 crc kubenswrapper[5023]: E0219 08:05:32.597567 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87 is running failed: container process not found" containerID="90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 08:05:32 crc kubenswrapper[5023]: E0219 08:05:32.598134 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87 is running failed: container process not found" containerID="90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 08:05:32 crc kubenswrapper[5023]: E0219 08:05:32.598167 5023 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hmqg6" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="registry-server"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.604440 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.604501 5023 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca" exitCode=137
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.604601 5023 scope.go:117] "RemoveContainer" containerID="5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.604754 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.607794 5023 generic.go:334] "Generic (PLEG): container finished" podID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerID="9d6b3faf4981e1cfdda319f7342d56e55fdcf2e7259cc5c8a36a365ac608f65b" exitCode=0
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.607848 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q274g" event={"ID":"4d82228e-e1cf-4274-8b24-5468d4c46e38","Type":"ContainerDied","Data":"9d6b3faf4981e1cfdda319f7342d56e55fdcf2e7259cc5c8a36a365ac608f65b"}
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634101 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634308 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634298 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634413 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634481 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzrdr\" (UniqueName: \"kubernetes.io/projected/6708d9d6-f225-4977-9446-8c2374e80e18-kube-api-access-wzrdr\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634521 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6708d9d6-f225-4977-9446-8c2374e80e18-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634580 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6708d9d6-f225-4977-9446-8c2374e80e18-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634661 5023 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634672 5023 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634684 5023 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.634694 5023 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.635115 5023 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.636045 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6708d9d6-f225-4977-9446-8c2374e80e18-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.639331 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6708d9d6-f225-4977-9446-8c2374e80e18-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.656067 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzrdr\" (UniqueName: \"kubernetes.io/projected/6708d9d6-f225-4977-9446-8c2374e80e18-kube-api-access-wzrdr\") pod \"marketplace-operator-79b997595-2zqn9\" (UID: \"6708d9d6-f225-4977-9446-8c2374e80e18\") " pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.900397 5023 scope.go:117] "RemoveContainer" containerID="5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca"
Feb 19 08:05:32 crc kubenswrapper[5023]: E0219 08:05:32.901135 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca\": container with ID starting with 5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca not found: ID does not exist" containerID="5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.901177 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca"} err="failed to get container status \"5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca\": rpc error: code = NotFound desc = could not find container \"5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca\": container with ID starting with 5418494dd175b361e48008176f9c96c2c04310ee1346a54ce3483e30f7de59ca not found: ID does not exist"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.914510 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.918766 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q274g"
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.939060 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-catalog-content\") pod \"4d82228e-e1cf-4274-8b24-5468d4c46e38\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.939150 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlm44\" (UniqueName: \"kubernetes.io/projected/4d82228e-e1cf-4274-8b24-5468d4c46e38-kube-api-access-wlm44\") pod \"4d82228e-e1cf-4274-8b24-5468d4c46e38\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.939262 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-utilities\") pod \"4d82228e-e1cf-4274-8b24-5468d4c46e38\" (UID: \"4d82228e-e1cf-4274-8b24-5468d4c46e38\") "
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.942436 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-utilities" (OuterVolumeSpecName: "utilities") pod "4d82228e-e1cf-4274-8b24-5468d4c46e38" (UID: "4d82228e-e1cf-4274-8b24-5468d4c46e38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:05:32 crc kubenswrapper[5023]: I0219 08:05:32.952848 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d82228e-e1cf-4274-8b24-5468d4c46e38-kube-api-access-wlm44" (OuterVolumeSpecName: "kube-api-access-wlm44") pod "4d82228e-e1cf-4274-8b24-5468d4c46e38" (UID: "4d82228e-e1cf-4274-8b24-5468d4c46e38"). InnerVolumeSpecName "kube-api-access-wlm44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.040646 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.040708 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlm44\" (UniqueName: \"kubernetes.io/projected/4d82228e-e1cf-4274-8b24-5468d4c46e38-kube-api-access-wlm44\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.060283 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d82228e-e1cf-4274-8b24-5468d4c46e38" (UID: "4d82228e-e1cf-4274-8b24-5468d4c46e38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.142328 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d82228e-e1cf-4274-8b24-5468d4c46e38-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.162365 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k"
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.168402 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmqg6"
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.178487 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cdmv"
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.178674 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcd4q"
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252479 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmvwq\" (UniqueName: \"kubernetes.io/projected/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-kube-api-access-fmvwq\") pod \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252545 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-catalog-content\") pod \"1f33f560-79f7-4acd-b439-22e6969ca87c\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252589 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-utilities\") pod \"3821bfef-83d2-421f-b316-00e277a9341d\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252639 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-operator-metrics\") pod \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252707 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqrl5\" (UniqueName: \"kubernetes.io/projected/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-kube-api-access-pqrl5\") pod \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252748 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-catalog-content\") pod \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252777 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-utilities\") pod \"1f33f560-79f7-4acd-b439-22e6969ca87c\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252794 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-utilities\") pod \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\" (UID: \"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252817 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-catalog-content\") pod \"3821bfef-83d2-421f-b316-00e277a9341d\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252889 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-trusted-ca\") pod \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\" (UID: \"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252926 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9tf\" (UniqueName: \"kubernetes.io/projected/3821bfef-83d2-421f-b316-00e277a9341d-kube-api-access-4n9tf\") pod \"3821bfef-83d2-421f-b316-00e277a9341d\" (UID: \"3821bfef-83d2-421f-b316-00e277a9341d\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.252968 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2znww\" (UniqueName: \"kubernetes.io/projected/1f33f560-79f7-4acd-b439-22e6969ca87c-kube-api-access-2znww\") pod \"1f33f560-79f7-4acd-b439-22e6969ca87c\" (UID: \"1f33f560-79f7-4acd-b439-22e6969ca87c\") "
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.253360 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-utilities" (OuterVolumeSpecName: "utilities") pod "3821bfef-83d2-421f-b316-00e277a9341d" (UID: "3821bfef-83d2-421f-b316-00e277a9341d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.253847 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-utilities" (OuterVolumeSpecName: "utilities") pod "5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" (UID: "5a796ff8-5fc9-4115-a9ed-e9367a9d6c62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.254850 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-utilities" (OuterVolumeSpecName: "utilities") pod "1f33f560-79f7-4acd-b439-22e6969ca87c" (UID: "1f33f560-79f7-4acd-b439-22e6969ca87c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.256214 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-kube-api-access-pqrl5" (OuterVolumeSpecName: "kube-api-access-pqrl5") pod "5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" (UID: "5a796ff8-5fc9-4115-a9ed-e9367a9d6c62"). InnerVolumeSpecName "kube-api-access-pqrl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.256425 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" (UID: "d815d7e3-52ce-4396-8e3d-9ccbcec21fa1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.256799 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3821bfef-83d2-421f-b316-00e277a9341d-kube-api-access-4n9tf" (OuterVolumeSpecName: "kube-api-access-4n9tf") pod "3821bfef-83d2-421f-b316-00e277a9341d" (UID: "3821bfef-83d2-421f-b316-00e277a9341d"). InnerVolumeSpecName "kube-api-access-4n9tf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.257037 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" (UID: "d815d7e3-52ce-4396-8e3d-9ccbcec21fa1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.257107 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f33f560-79f7-4acd-b439-22e6969ca87c-kube-api-access-2znww" (OuterVolumeSpecName: "kube-api-access-2znww") pod "1f33f560-79f7-4acd-b439-22e6969ca87c" (UID: "1f33f560-79f7-4acd-b439-22e6969ca87c"). InnerVolumeSpecName "kube-api-access-2znww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.257405 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-kube-api-access-fmvwq" (OuterVolumeSpecName: "kube-api-access-fmvwq") pod "d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" (UID: "d815d7e3-52ce-4396-8e3d-9ccbcec21fa1"). InnerVolumeSpecName "kube-api-access-fmvwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.283523 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3821bfef-83d2-421f-b316-00e277a9341d" (UID: "3821bfef-83d2-421f-b316-00e277a9341d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.314136 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f33f560-79f7-4acd-b439-22e6969ca87c" (UID: "1f33f560-79f7-4acd-b439-22e6969ca87c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.354408 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmvwq\" (UniqueName: \"kubernetes.io/projected/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-kube-api-access-fmvwq\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.354689 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.354763 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.354825 5023 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.354901 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqrl5\" (UniqueName: \"kubernetes.io/projected/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-kube-api-access-pqrl5\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.354963 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33f560-79f7-4acd-b439-22e6969ca87c-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.355019 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.355074 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3821bfef-83d2-421f-b316-00e277a9341d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.355196 5023 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.355263 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9tf\" (UniqueName: \"kubernetes.io/projected/3821bfef-83d2-421f-b316-00e277a9341d-kube-api-access-4n9tf\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.355327 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2znww\" (UniqueName: \"kubernetes.io/projected/1f33f560-79f7-4acd-b439-22e6969ca87c-kube-api-access-2znww\") on node \"crc\" DevicePath \"\""
Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.375707 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" (UID: "5a796ff8-5fc9-4115-a9ed-e9367a9d6c62"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.456896 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.470927 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2zqn9"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.488283 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.619866 5023 generic.go:334] "Generic (PLEG): container finished" podID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerID="c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345" exitCode=0 Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.619948 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cdmv" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.619970 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cdmv" event={"ID":"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62","Type":"ContainerDied","Data":"c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.620311 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cdmv" event={"ID":"5a796ff8-5fc9-4115-a9ed-e9367a9d6c62","Type":"ContainerDied","Data":"7aa0fba1c5430462f9aca60320924a17dd0f60898a51f88b2cbf1f613b65ef19"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.620346 5023 scope.go:117] "RemoveContainer" containerID="c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.626161 5023 generic.go:334] "Generic (PLEG): container finished" podID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerID="90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" exitCode=0 Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.626243 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmqg6" event={"ID":"1f33f560-79f7-4acd-b439-22e6969ca87c","Type":"ContainerDied","Data":"90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.626300 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hmqg6" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.626327 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmqg6" event={"ID":"1f33f560-79f7-4acd-b439-22e6969ca87c","Type":"ContainerDied","Data":"f224f4e37d4896aa33a3ef3d5a4d679597433522e4a6f651fabad2e4334819b6"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.628554 5023 generic.go:334] "Generic (PLEG): container finished" podID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerID="897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688" exitCode=0 Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.628713 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" event={"ID":"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1","Type":"ContainerDied","Data":"897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.628745 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" event={"ID":"d815d7e3-52ce-4396-8e3d-9ccbcec21fa1","Type":"ContainerDied","Data":"1e6a3b728eac6fb72df4c80e53f9425bfdde6cee3057d3bd0dfd24dea7f773ff"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.628743 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xxg6k" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.632930 5023 generic.go:334] "Generic (PLEG): container finished" podID="3821bfef-83d2-421f-b316-00e277a9341d" containerID="8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316" exitCode=0 Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.633025 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcd4q" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.633188 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcd4q" event={"ID":"3821bfef-83d2-421f-b316-00e277a9341d","Type":"ContainerDied","Data":"8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.633310 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcd4q" event={"ID":"3821bfef-83d2-421f-b316-00e277a9341d","Type":"ContainerDied","Data":"518b2ac7b53552ee86dadf63bc8cb692ce1346b979857dd3bc7ccceff189a764"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.639389 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cdmv"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.640568 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9" event={"ID":"6708d9d6-f225-4977-9446-8c2374e80e18","Type":"ContainerStarted","Data":"f862b3f556f67bb7690fc6ea71728e1b0c21c67c9ab0adebf1111ffb81e5e81b"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.641115 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.642796 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2cdmv"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.643807 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q274g" event={"ID":"4d82228e-e1cf-4274-8b24-5468d4c46e38","Type":"ContainerDied","Data":"6c8cad5b694e55fd419c70856a6f1fd5d100b364e8a4ba93b52b070cd7bd06ef"} Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.643978 5023 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q274g" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.645565 5023 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2zqn9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.645663 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9" podUID="6708d9d6-f225-4977-9446-8c2374e80e18" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.650747 5023 scope.go:117] "RemoveContainer" containerID="28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.672767 5023 scope.go:117] "RemoveContainer" containerID="a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.682747 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9" podStartSLOduration=1.6827157719999999 podStartE2EDuration="1.682715772s" podCreationTimestamp="2026-02-19 08:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:05:33.676844775 +0000 UTC m=+291.333963743" watchObservedRunningTime="2026-02-19 08:05:33.682715772 +0000 UTC m=+291.339834720" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.697375 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcd4q"] Feb 19 
08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.700789 5023 scope.go:117] "RemoveContainer" containerID="c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.701728 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345\": container with ID starting with c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345 not found: ID does not exist" containerID="c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.701837 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345"} err="failed to get container status \"c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345\": rpc error: code = NotFound desc = could not find container \"c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345\": container with ID starting with c51886501fc0d097d37fd3533056f3fdba3f678b690a905660db8bb4fc0ce345 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.701928 5023 scope.go:117] "RemoveContainer" containerID="28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.702564 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0\": container with ID starting with 28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0 not found: ID does not exist" containerID="28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.702609 5023 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0"} err="failed to get container status \"28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0\": rpc error: code = NotFound desc = could not find container \"28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0\": container with ID starting with 28781dc42a55955c4d25c03ef848ba475b49f103034a42b73f2f9e8c8ca4add0 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.702662 5023 scope.go:117] "RemoveContainer" containerID="a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.702978 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcd4q"] Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.703162 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700\": container with ID starting with a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700 not found: ID does not exist" containerID="a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.703224 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700"} err="failed to get container status \"a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700\": rpc error: code = NotFound desc = could not find container \"a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700\": container with ID starting with a6e2b47cbe96b3e0d9df0c9b97788817b862742455434b8a112effdfb403a700 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.703258 5023 scope.go:117] "RemoveContainer" 
containerID="90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.711269 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xxg6k"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.715568 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xxg6k"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.722070 5023 scope.go:117] "RemoveContainer" containerID="f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.722611 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q274g"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.743256 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q274g"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.743671 5023 scope.go:117] "RemoveContainer" containerID="87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.747852 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hmqg6"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.752269 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hmqg6"] Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.757006 5023 scope.go:117] "RemoveContainer" containerID="90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.757424 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87\": container with ID starting with 
90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87 not found: ID does not exist" containerID="90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.757495 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87"} err="failed to get container status \"90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87\": rpc error: code = NotFound desc = could not find container \"90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87\": container with ID starting with 90ef7e7a243e7de024696c75cc6ca5b0744b219123567f42151236f944487e87 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.757539 5023 scope.go:117] "RemoveContainer" containerID="f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.757985 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c\": container with ID starting with f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c not found: ID does not exist" containerID="f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.758106 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c"} err="failed to get container status \"f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c\": rpc error: code = NotFound desc = could not find container \"f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c\": container with ID starting with f1416cc1d925a0efd4fac2191b368d35f608b6dbc7a42ba7f0fadf00883a571c not found: ID does not 
exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.758194 5023 scope.go:117] "RemoveContainer" containerID="87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.758773 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4\": container with ID starting with 87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4 not found: ID does not exist" containerID="87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.758818 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4"} err="failed to get container status \"87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4\": rpc error: code = NotFound desc = could not find container \"87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4\": container with ID starting with 87cd22f011ef40a653d7a776072e7741342e82286eb80c6651215862f05737b4 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.758852 5023 scope.go:117] "RemoveContainer" containerID="897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.772813 5023 scope.go:117] "RemoveContainer" containerID="897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.773335 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688\": container with ID starting with 897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688 not found: ID does not exist" 
containerID="897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.773443 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688"} err="failed to get container status \"897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688\": rpc error: code = NotFound desc = could not find container \"897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688\": container with ID starting with 897cd395639e96cd9a8ac16731b2292b8db64ce12da6f675d3d0960edd0c3688 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.773558 5023 scope.go:117] "RemoveContainer" containerID="8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.785750 5023 scope.go:117] "RemoveContainer" containerID="070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.799657 5023 scope.go:117] "RemoveContainer" containerID="17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.812726 5023 scope.go:117] "RemoveContainer" containerID="8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.813549 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316\": container with ID starting with 8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316 not found: ID does not exist" containerID="8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.813592 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316"} err="failed to get container status \"8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316\": rpc error: code = NotFound desc = could not find container \"8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316\": container with ID starting with 8fb6632b73d887828baa2c816629a6034b29967274f3bc78c6053a54fb134316 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.813636 5023 scope.go:117] "RemoveContainer" containerID="070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.814053 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7\": container with ID starting with 070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7 not found: ID does not exist" containerID="070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.814196 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7"} err="failed to get container status \"070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7\": rpc error: code = NotFound desc = could not find container \"070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7\": container with ID starting with 070a364b74800a9d5e218cd32c447d827be0408e76dad6900bde32c397ba4ef7 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.814318 5023 scope.go:117] "RemoveContainer" containerID="17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8" Feb 19 08:05:33 crc kubenswrapper[5023]: E0219 08:05:33.814947 5023 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8\": container with ID starting with 17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8 not found: ID does not exist" containerID="17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.814971 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8"} err="failed to get container status \"17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8\": rpc error: code = NotFound desc = could not find container \"17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8\": container with ID starting with 17e2740cec228901b121a68986f752fe4a5cd2bc7a824c5dc5635587c0ba83d8 not found: ID does not exist" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.814992 5023 scope.go:117] "RemoveContainer" containerID="9d6b3faf4981e1cfdda319f7342d56e55fdcf2e7259cc5c8a36a365ac608f65b" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.834581 5023 scope.go:117] "RemoveContainer" containerID="4ffabddcc35b30532f58ee7fd852fb540a9bd6dae55d3e6149550ff51dc11cc1" Feb 19 08:05:33 crc kubenswrapper[5023]: I0219 08:05:33.850119 5023 scope.go:117] "RemoveContainer" containerID="ea5d0664c794877cb7931705148bac489d83b578556c33cdb651210aa5cc39d3" Feb 19 08:05:34 crc kubenswrapper[5023]: I0219 08:05:34.651603 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9" event={"ID":"6708d9d6-f225-4977-9446-8c2374e80e18","Type":"ContainerStarted","Data":"1e462cb9be4456e7dd28472650eb8dac11a3a112e312d521373c1046b7bb0811"} Feb 19 08:05:34 crc kubenswrapper[5023]: I0219 08:05:34.654969 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-2zqn9" Feb 19 08:05:35 crc kubenswrapper[5023]: I0219 08:05:35.488491 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" path="/var/lib/kubelet/pods/1f33f560-79f7-4acd-b439-22e6969ca87c/volumes" Feb 19 08:05:35 crc kubenswrapper[5023]: I0219 08:05:35.489818 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3821bfef-83d2-421f-b316-00e277a9341d" path="/var/lib/kubelet/pods/3821bfef-83d2-421f-b316-00e277a9341d/volumes" Feb 19 08:05:35 crc kubenswrapper[5023]: I0219 08:05:35.491078 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" path="/var/lib/kubelet/pods/4d82228e-e1cf-4274-8b24-5468d4c46e38/volumes" Feb 19 08:05:35 crc kubenswrapper[5023]: I0219 08:05:35.493136 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" path="/var/lib/kubelet/pods/5a796ff8-5fc9-4115-a9ed-e9367a9d6c62/volumes" Feb 19 08:05:35 crc kubenswrapper[5023]: I0219 08:05:35.494429 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" path="/var/lib/kubelet/pods/d815d7e3-52ce-4396-8e3d-9ccbcec21fa1/volumes" Feb 19 08:05:43 crc kubenswrapper[5023]: I0219 08:05:43.299204 5023 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 08:05:48 crc kubenswrapper[5023]: I0219 08:05:48.611433 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 08:05:50 crc kubenswrapper[5023]: I0219 08:05:50.319496 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 08:05:50 crc kubenswrapper[5023]: I0219 08:05:50.576299 5023 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 08:05:55 crc kubenswrapper[5023]: I0219 08:05:55.013095 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 08:05:55 crc kubenswrapper[5023]: I0219 08:05:55.684309 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 08:05:57 crc kubenswrapper[5023]: I0219 08:05:57.151095 5023 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 08:05:59 crc kubenswrapper[5023]: I0219 08:05:59.340220 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 08:06:00 crc kubenswrapper[5023]: I0219 08:06:00.329804 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 08:06:02 crc kubenswrapper[5023]: I0219 08:06:02.615753 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 08:06:06 crc kubenswrapper[5023]: I0219 08:06:06.101942 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.908749 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tkmgb"] Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909665 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909681 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 
08:06:26.909696 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909705 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909718 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909725 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909735 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909742 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909753 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909760 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909772 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909779 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 
08:06:26.909790 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909797 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909806 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909813 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="extract-utilities" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909823 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909830 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909841 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909848 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.909855 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909862 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 
08:06:26.909872 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerName="marketplace-operator" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.909879 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerName="marketplace-operator" Feb 19 08:06:26 crc kubenswrapper[5023]: E0219 08:06:26.910017 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.910027 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="extract-content" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.910144 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a796ff8-5fc9-4115-a9ed-e9367a9d6c62" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.910161 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f33f560-79f7-4acd-b439-22e6969ca87c" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.910171 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d82228e-e1cf-4274-8b24-5468d4c46e38" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.910179 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3821bfef-83d2-421f-b316-00e277a9341d" containerName="registry-server" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.910190 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d815d7e3-52ce-4396-8e3d-9ccbcec21fa1" containerName="marketplace-operator" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.911131 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.913321 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.928993 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkmgb"] Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.972311 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-utilities\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.972361 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27w26\" (UniqueName: \"kubernetes.io/projected/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-kube-api-access-27w26\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:26 crc kubenswrapper[5023]: I0219 08:06:26.972858 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-catalog-content\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.074730 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-catalog-content\") pod \"certified-operators-tkmgb\" (UID: 
\"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.074814 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-utilities\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.074851 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27w26\" (UniqueName: \"kubernetes.io/projected/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-kube-api-access-27w26\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.075646 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-utilities\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.075845 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-catalog-content\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.107686 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pqk9z"] Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.109837 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.114437 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.118598 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27w26\" (UniqueName: \"kubernetes.io/projected/eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627-kube-api-access-27w26\") pod \"certified-operators-tkmgb\" (UID: \"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627\") " pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.127276 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqk9z"] Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.176647 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-utilities\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.176705 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwvbh\" (UniqueName: \"kubernetes.io/projected/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-kube-api-access-mwvbh\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.176787 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-catalog-content\") pod \"community-operators-pqk9z\" (UID: 
\"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.233150 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.278209 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-utilities\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.278269 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwvbh\" (UniqueName: \"kubernetes.io/projected/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-kube-api-access-mwvbh\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.278330 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-catalog-content\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.278851 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-catalog-content\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.279039 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-utilities\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.297066 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwvbh\" (UniqueName: \"kubernetes.io/projected/e0d2964c-4c2f-4c86-bcf9-a5e574c18629-kube-api-access-mwvbh\") pod \"community-operators-pqk9z\" (UID: \"e0d2964c-4c2f-4c86-bcf9-a5e574c18629\") " pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.442590 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.620951 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkmgb"] Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.830889 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqk9z"] Feb 19 08:06:27 crc kubenswrapper[5023]: W0219 08:06:27.833123 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0d2964c_4c2f_4c86_bcf9_a5e574c18629.slice/crio-395e0e18b7da0fba0d86a053415abbaa09aeed0c8b074c74f5fe0b6216409efe WatchSource:0}: Error finding container 395e0e18b7da0fba0d86a053415abbaa09aeed0c8b074c74f5fe0b6216409efe: Status 404 returned error can't find the container with id 395e0e18b7da0fba0d86a053415abbaa09aeed0c8b074c74f5fe0b6216409efe Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.959067 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqk9z" 
event={"ID":"e0d2964c-4c2f-4c86-bcf9-a5e574c18629","Type":"ContainerStarted","Data":"b1fb3b86065b6bfb2872e9c06579dbc82b579d8cadc09dcced2a49492316f1ca"} Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.959125 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqk9z" event={"ID":"e0d2964c-4c2f-4c86-bcf9-a5e574c18629","Type":"ContainerStarted","Data":"395e0e18b7da0fba0d86a053415abbaa09aeed0c8b074c74f5fe0b6216409efe"} Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.960828 5023 generic.go:334] "Generic (PLEG): container finished" podID="eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627" containerID="92701af154bf9b187375d5c97f55a54227b5319eeabfe4cdbe70b771e030ca38" exitCode=0 Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.960882 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkmgb" event={"ID":"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627","Type":"ContainerDied","Data":"92701af154bf9b187375d5c97f55a54227b5319eeabfe4cdbe70b771e030ca38"} Feb 19 08:06:27 crc kubenswrapper[5023]: I0219 08:06:27.960920 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkmgb" event={"ID":"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627","Type":"ContainerStarted","Data":"fae26e6ac62397d52a4b76a010a1e6b968f8bb594fca38a7884d0365d176c77e"} Feb 19 08:06:28 crc kubenswrapper[5023]: I0219 08:06:28.970981 5023 generic.go:334] "Generic (PLEG): container finished" podID="eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627" containerID="0c1aedb375c8c2f968aa202336b3c0608d2a16b07420be246e60c5ab22d11e6b" exitCode=0 Feb 19 08:06:28 crc kubenswrapper[5023]: I0219 08:06:28.971890 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkmgb" event={"ID":"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627","Type":"ContainerDied","Data":"0c1aedb375c8c2f968aa202336b3c0608d2a16b07420be246e60c5ab22d11e6b"} Feb 19 08:06:28 crc kubenswrapper[5023]: 
I0219 08:06:28.974471 5023 generic.go:334] "Generic (PLEG): container finished" podID="e0d2964c-4c2f-4c86-bcf9-a5e574c18629" containerID="b1fb3b86065b6bfb2872e9c06579dbc82b579d8cadc09dcced2a49492316f1ca" exitCode=0 Feb 19 08:06:28 crc kubenswrapper[5023]: I0219 08:06:28.974828 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqk9z" event={"ID":"e0d2964c-4c2f-4c86-bcf9-a5e574c18629","Type":"ContainerDied","Data":"b1fb3b86065b6bfb2872e9c06579dbc82b579d8cadc09dcced2a49492316f1ca"} Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.303125 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gf9zh"] Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.304724 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.306669 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.310541 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf9zh"] Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.404240 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-catalog-content\") pod \"redhat-marketplace-gf9zh\" (UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.404298 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6q2n\" (UniqueName: \"kubernetes.io/projected/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-kube-api-access-s6q2n\") pod \"redhat-marketplace-gf9zh\" 
(UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.404333 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-utilities\") pod \"redhat-marketplace-gf9zh\" (UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.507177 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-catalog-content\") pod \"redhat-marketplace-gf9zh\" (UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.507250 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6q2n\" (UniqueName: \"kubernetes.io/projected/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-kube-api-access-s6q2n\") pod \"redhat-marketplace-gf9zh\" (UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.507324 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-utilities\") pod \"redhat-marketplace-gf9zh\" (UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.509750 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-utilities\") pod \"redhat-marketplace-gf9zh\" (UID: 
\"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.510039 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-catalog-content\") pod \"redhat-marketplace-gf9zh\" (UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.514505 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hsgr7"] Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.515539 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.526380 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.540263 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsgr7"] Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.549716 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6q2n\" (UniqueName: \"kubernetes.io/projected/cb3df312-4ed1-4b2c-bfb0-52328b896bdc-kube-api-access-s6q2n\") pod \"redhat-marketplace-gf9zh\" (UID: \"cb3df312-4ed1-4b2c-bfb0-52328b896bdc\") " pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.608444 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7c1033-62a2-4d63-b198-075622e7f90c-catalog-content\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " 
pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.608555 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7c1033-62a2-4d63-b198-075622e7f90c-utilities\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.608594 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtfzg\" (UniqueName: \"kubernetes.io/projected/ba7c1033-62a2-4d63-b198-075622e7f90c-kube-api-access-dtfzg\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.657681 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.710046 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7c1033-62a2-4d63-b198-075622e7f90c-utilities\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.710107 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtfzg\" (UniqueName: \"kubernetes.io/projected/ba7c1033-62a2-4d63-b198-075622e7f90c-kube-api-access-dtfzg\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.710141 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7c1033-62a2-4d63-b198-075622e7f90c-catalog-content\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.710568 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7c1033-62a2-4d63-b198-075622e7f90c-catalog-content\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.710741 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7c1033-62a2-4d63-b198-075622e7f90c-utilities\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.727304 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtfzg\" (UniqueName: \"kubernetes.io/projected/ba7c1033-62a2-4d63-b198-075622e7f90c-kube-api-access-dtfzg\") pod \"redhat-operators-hsgr7\" (UID: \"ba7c1033-62a2-4d63-b198-075622e7f90c\") " pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.839521 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.983863 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkmgb" event={"ID":"eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627","Type":"ContainerStarted","Data":"249745684c6940147d883db1a211dc4e94409d742a2b00ce09cd337441cdabef"} Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.985662 5023 generic.go:334] "Generic (PLEG): container finished" podID="e0d2964c-4c2f-4c86-bcf9-a5e574c18629" containerID="90c56d2d721f4e342dc20ddf82e6c21f77eb45b6e4fd9992528ace428ef220c5" exitCode=0 Feb 19 08:06:29 crc kubenswrapper[5023]: I0219 08:06:29.985703 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqk9z" event={"ID":"e0d2964c-4c2f-4c86-bcf9-a5e574c18629","Type":"ContainerDied","Data":"90c56d2d721f4e342dc20ddf82e6c21f77eb45b6e4fd9992528ace428ef220c5"} Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.001551 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tkmgb" podStartSLOduration=2.609293154 podStartE2EDuration="4.0015061s" podCreationTimestamp="2026-02-19 08:06:26 +0000 UTC" firstStartedPulling="2026-02-19 08:06:27.962652264 +0000 UTC m=+345.619771212" lastFinishedPulling="2026-02-19 08:06:29.35486521 +0000 UTC m=+347.011984158" observedRunningTime="2026-02-19 08:06:30.000875423 +0000 UTC m=+347.657994371" watchObservedRunningTime="2026-02-19 08:06:30.0015061 +0000 UTC m=+347.658625058" Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.042152 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf9zh"] Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.218230 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsgr7"] Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 
08:06:30.990999 5023 generic.go:334] "Generic (PLEG): container finished" podID="ba7c1033-62a2-4d63-b198-075622e7f90c" containerID="923519ed96efe089a458b52aa40bcc4e739d6293969ee56d9edd7704d306b8d7" exitCode=0 Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.991084 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsgr7" event={"ID":"ba7c1033-62a2-4d63-b198-075622e7f90c","Type":"ContainerDied","Data":"923519ed96efe089a458b52aa40bcc4e739d6293969ee56d9edd7704d306b8d7"} Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.991140 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsgr7" event={"ID":"ba7c1033-62a2-4d63-b198-075622e7f90c","Type":"ContainerStarted","Data":"87fe7d3cfddec6cb40255c66849f356934c61bf3efe8eb8f0192f14735fdbed8"} Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.994113 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqk9z" event={"ID":"e0d2964c-4c2f-4c86-bcf9-a5e574c18629","Type":"ContainerStarted","Data":"2fe7e8f9c4e0ab75ebb1d460e9654f9f9fdc0fb23f5716d64a8abdf72a520ef7"} Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.995707 5023 generic.go:334] "Generic (PLEG): container finished" podID="cb3df312-4ed1-4b2c-bfb0-52328b896bdc" containerID="68ba69d1e14168e4e9a484948eed1d293155a99a3e2726372ee94d6d00edbb48" exitCode=0 Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.995754 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf9zh" event={"ID":"cb3df312-4ed1-4b2c-bfb0-52328b896bdc","Type":"ContainerDied","Data":"68ba69d1e14168e4e9a484948eed1d293155a99a3e2726372ee94d6d00edbb48"} Feb 19 08:06:30 crc kubenswrapper[5023]: I0219 08:06:30.995804 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf9zh" 
event={"ID":"cb3df312-4ed1-4b2c-bfb0-52328b896bdc","Type":"ContainerStarted","Data":"4668a568d023c1f1445c4470b4ade8c7d7a7a07c4d4a72e93180a13edc8a69cd"} Feb 19 08:06:31 crc kubenswrapper[5023]: I0219 08:06:31.034987 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pqk9z" podStartSLOduration=2.44932828 podStartE2EDuration="4.03496388s" podCreationTimestamp="2026-02-19 08:06:27 +0000 UTC" firstStartedPulling="2026-02-19 08:06:28.976766571 +0000 UTC m=+346.633885519" lastFinishedPulling="2026-02-19 08:06:30.562402151 +0000 UTC m=+348.219521119" observedRunningTime="2026-02-19 08:06:31.030564929 +0000 UTC m=+348.687683877" watchObservedRunningTime="2026-02-19 08:06:31.03496388 +0000 UTC m=+348.692082858" Feb 19 08:06:32 crc kubenswrapper[5023]: I0219 08:06:32.005394 5023 generic.go:334] "Generic (PLEG): container finished" podID="cb3df312-4ed1-4b2c-bfb0-52328b896bdc" containerID="8f65b7c4c077b67eb9c3dfd7870c8171a93efe76e534e258edef5fca9dc45f40" exitCode=0 Feb 19 08:06:32 crc kubenswrapper[5023]: I0219 08:06:32.005484 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf9zh" event={"ID":"cb3df312-4ed1-4b2c-bfb0-52328b896bdc","Type":"ContainerDied","Data":"8f65b7c4c077b67eb9c3dfd7870c8171a93efe76e534e258edef5fca9dc45f40"} Feb 19 08:06:32 crc kubenswrapper[5023]: I0219 08:06:32.008121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsgr7" event={"ID":"ba7c1033-62a2-4d63-b198-075622e7f90c","Type":"ContainerStarted","Data":"5c8a7ece5644b6e4b6676f92345b9eeaaff91c16428e353c71e12cc8e4ca588e"} Feb 19 08:06:33 crc kubenswrapper[5023]: I0219 08:06:33.015945 5023 generic.go:334] "Generic (PLEG): container finished" podID="ba7c1033-62a2-4d63-b198-075622e7f90c" containerID="5c8a7ece5644b6e4b6676f92345b9eeaaff91c16428e353c71e12cc8e4ca588e" exitCode=0 Feb 19 08:06:33 crc kubenswrapper[5023]: I0219 08:06:33.016034 
5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsgr7" event={"ID":"ba7c1033-62a2-4d63-b198-075622e7f90c","Type":"ContainerDied","Data":"5c8a7ece5644b6e4b6676f92345b9eeaaff91c16428e353c71e12cc8e4ca588e"} Feb 19 08:06:33 crc kubenswrapper[5023]: I0219 08:06:33.019722 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf9zh" event={"ID":"cb3df312-4ed1-4b2c-bfb0-52328b896bdc","Type":"ContainerStarted","Data":"648d7651decfb75e47b48d8d7df0778ba3ec269f13a8b762b444ddec55a7c002"} Feb 19 08:06:33 crc kubenswrapper[5023]: I0219 08:06:33.049869 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gf9zh" podStartSLOduration=2.6655129779999998 podStartE2EDuration="4.049842737s" podCreationTimestamp="2026-02-19 08:06:29 +0000 UTC" firstStartedPulling="2026-02-19 08:06:30.996763448 +0000 UTC m=+348.653882406" lastFinishedPulling="2026-02-19 08:06:32.381093217 +0000 UTC m=+350.038212165" observedRunningTime="2026-02-19 08:06:33.046426363 +0000 UTC m=+350.703545321" watchObservedRunningTime="2026-02-19 08:06:33.049842737 +0000 UTC m=+350.706961685" Feb 19 08:06:34 crc kubenswrapper[5023]: I0219 08:06:34.033330 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsgr7" event={"ID":"ba7c1033-62a2-4d63-b198-075622e7f90c","Type":"ContainerStarted","Data":"9133f8210f6570f2a2a676220d07c6c7a3e26b2c0a4766402858cb29c6cca013"} Feb 19 08:06:34 crc kubenswrapper[5023]: I0219 08:06:34.059767 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hsgr7" podStartSLOduration=2.6675361029999998 podStartE2EDuration="5.059740978s" podCreationTimestamp="2026-02-19 08:06:29 +0000 UTC" firstStartedPulling="2026-02-19 08:06:30.992355317 +0000 UTC m=+348.649474265" lastFinishedPulling="2026-02-19 08:06:33.384560192 +0000 UTC 
m=+351.041679140" observedRunningTime="2026-02-19 08:06:34.058268847 +0000 UTC m=+351.715387795" watchObservedRunningTime="2026-02-19 08:06:34.059740978 +0000 UTC m=+351.716859926" Feb 19 08:06:37 crc kubenswrapper[5023]: I0219 08:06:37.234853 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:37 crc kubenswrapper[5023]: I0219 08:06:37.235663 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:37 crc kubenswrapper[5023]: I0219 08:06:37.277239 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:37 crc kubenswrapper[5023]: I0219 08:06:37.443600 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:37 crc kubenswrapper[5023]: I0219 08:06:37.443664 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:37 crc kubenswrapper[5023]: I0219 08:06:37.487517 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:38 crc kubenswrapper[5023]: I0219 08:06:38.092595 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pqk9z" Feb 19 08:06:38 crc kubenswrapper[5023]: I0219 08:06:38.094210 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tkmgb" Feb 19 08:06:39 crc kubenswrapper[5023]: I0219 08:06:39.658362 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:39 crc kubenswrapper[5023]: I0219 08:06:39.659152 5023 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:39 crc kubenswrapper[5023]: I0219 08:06:39.713120 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:39 crc kubenswrapper[5023]: I0219 08:06:39.840365 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:39 crc kubenswrapper[5023]: I0219 08:06:39.840406 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:39 crc kubenswrapper[5023]: I0219 08:06:39.877495 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:40 crc kubenswrapper[5023]: I0219 08:06:40.115979 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gf9zh" Feb 19 08:06:40 crc kubenswrapper[5023]: I0219 08:06:40.127691 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hsgr7" Feb 19 08:06:41 crc kubenswrapper[5023]: I0219 08:06:41.870149 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:06:41 crc kubenswrapper[5023]: I0219 08:06:41.870217 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:06:43 crc kubenswrapper[5023]: 
I0219 08:06:43.710977 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qgsl7"] Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.712514 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.735658 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qgsl7"] Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849230 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5ac73-6923-4e83-809b-b5730734b445-trusted-ca\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849282 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-bound-sa-token\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849331 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849362 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ff5ac73-6923-4e83-809b-b5730734b445-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849448 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ff5ac73-6923-4e83-809b-b5730734b445-registry-certificates\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849548 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgn8\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-kube-api-access-qdgn8\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849583 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-registry-tls\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.849772 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ff5ac73-6923-4e83-809b-b5730734b445-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.890167 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.951630 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ff5ac73-6923-4e83-809b-b5730734b445-registry-certificates\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.951701 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgn8\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-kube-api-access-qdgn8\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.951725 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-registry-tls\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.951756 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/9ff5ac73-6923-4e83-809b-b5730734b445-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.951787 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5ac73-6923-4e83-809b-b5730734b445-trusted-ca\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.951806 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-bound-sa-token\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.951827 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ff5ac73-6923-4e83-809b-b5730734b445-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.952634 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ff5ac73-6923-4e83-809b-b5730734b445-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.953089 5023 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff5ac73-6923-4e83-809b-b5730734b445-trusted-ca\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.953223 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ff5ac73-6923-4e83-809b-b5730734b445-registry-certificates\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.962251 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ff5ac73-6923-4e83-809b-b5730734b445-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.963143 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-registry-tls\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.968281 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-bound-sa-token\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:43.969031 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgn8\" (UniqueName: \"kubernetes.io/projected/9ff5ac73-6923-4e83-809b-b5730734b445-kube-api-access-qdgn8\") pod \"image-registry-66df7c8f76-qgsl7\" (UID: \"9ff5ac73-6923-4e83-809b-b5730734b445\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:44.027702 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:44 crc kubenswrapper[5023]: I0219 08:06:44.774790 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qgsl7"] Feb 19 08:06:44 crc kubenswrapper[5023]: W0219 08:06:44.789138 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff5ac73_6923_4e83_809b_b5730734b445.slice/crio-641dfa5a7078ad3a643b0b288898605b5779357fae0a0cdbcafce8fee9303597 WatchSource:0}: Error finding container 641dfa5a7078ad3a643b0b288898605b5779357fae0a0cdbcafce8fee9303597: Status 404 returned error can't find the container with id 641dfa5a7078ad3a643b0b288898605b5779357fae0a0cdbcafce8fee9303597 Feb 19 08:06:45 crc kubenswrapper[5023]: I0219 08:06:45.100579 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" event={"ID":"9ff5ac73-6923-4e83-809b-b5730734b445","Type":"ContainerStarted","Data":"38e860534574cd134cf3c2e1d1d59767fc0c2d4c6bbd4ec09476c3ab9fc63a84"} Feb 19 08:06:45 crc kubenswrapper[5023]: I0219 08:06:45.100846 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" event={"ID":"9ff5ac73-6923-4e83-809b-b5730734b445","Type":"ContainerStarted","Data":"641dfa5a7078ad3a643b0b288898605b5779357fae0a0cdbcafce8fee9303597"} Feb 19 08:06:45 crc kubenswrapper[5023]: I0219 08:06:45.100867 
5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:06:45 crc kubenswrapper[5023]: I0219 08:06:45.120419 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" podStartSLOduration=2.120395951 podStartE2EDuration="2.120395951s" podCreationTimestamp="2026-02-19 08:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:06:45.117695397 +0000 UTC m=+362.774814365" watchObservedRunningTime="2026-02-19 08:06:45.120395951 +0000 UTC m=+362.777514919" Feb 19 08:07:04 crc kubenswrapper[5023]: I0219 08:07:04.035987 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qgsl7" Feb 19 08:07:04 crc kubenswrapper[5023]: I0219 08:07:04.108944 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rndqk"] Feb 19 08:07:11 crc kubenswrapper[5023]: I0219 08:07:11.870113 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:07:11 crc kubenswrapper[5023]: I0219 08:07:11.870919 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.149322 5023 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" podUID="a9b9ec2c-86d2-40e9-b7bb-e7af21612798" containerName="registry" containerID="cri-o://93690281830da2190097f46acec109c7978c61a85724e7ae3e9e8af570260431" gracePeriod=30 Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.393755 5023 generic.go:334] "Generic (PLEG): container finished" podID="a9b9ec2c-86d2-40e9-b7bb-e7af21612798" containerID="93690281830da2190097f46acec109c7978c61a85724e7ae3e9e8af570260431" exitCode=0 Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.393849 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" event={"ID":"a9b9ec2c-86d2-40e9-b7bb-e7af21612798","Type":"ContainerDied","Data":"93690281830da2190097f46acec109c7978c61a85724e7ae3e9e8af570260431"} Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.525317 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603263 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-tls\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603312 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-installation-pull-secrets\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603488 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603518 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-trusted-ca\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603620 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-ca-trust-extracted\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603716 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8l9f\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-kube-api-access-k8l9f\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603734 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-bound-sa-token\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.603755 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-certificates\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 
08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.604481 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.604680 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.610557 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-kube-api-access-k8l9f" (OuterVolumeSpecName: "kube-api-access-k8l9f") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "kube-api-access-k8l9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.611575 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.613034 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:07:29 crc kubenswrapper[5023]: E0219 08:07:29.613046 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:a9b9ec2c-86d2-40e9-b7bb-e7af21612798 nodeName:}" failed. No retries permitted until 2026-02-19 08:07:30.113021088 +0000 UTC m=+407.770140036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "registry-storage" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.613079 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.618502 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.705387 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8l9f\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-kube-api-access-k8l9f\") on node \"crc\" DevicePath \"\"" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.705713 5023 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.705732 5023 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.705743 5023 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.705755 5023 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.705765 5023 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:07:29 crc kubenswrapper[5023]: I0219 08:07:29.705775 5023 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a9b9ec2c-86d2-40e9-b7bb-e7af21612798-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 08:07:30 crc kubenswrapper[5023]: I0219 08:07:30.213783 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\" (UID: \"a9b9ec2c-86d2-40e9-b7bb-e7af21612798\") " Feb 19 08:07:30 crc kubenswrapper[5023]: I0219 08:07:30.221566 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a9b9ec2c-86d2-40e9-b7bb-e7af21612798" (UID: "a9b9ec2c-86d2-40e9-b7bb-e7af21612798"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 08:07:30 crc kubenswrapper[5023]: I0219 08:07:30.400729 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" event={"ID":"a9b9ec2c-86d2-40e9-b7bb-e7af21612798","Type":"ContainerDied","Data":"1bcae242847527734a508338c062cd36da787cf5df4496c0f1dfe4c086069b8e"} Feb 19 08:07:30 crc kubenswrapper[5023]: I0219 08:07:30.400809 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rndqk" Feb 19 08:07:30 crc kubenswrapper[5023]: I0219 08:07:30.400815 5023 scope.go:117] "RemoveContainer" containerID="93690281830da2190097f46acec109c7978c61a85724e7ae3e9e8af570260431" Feb 19 08:07:30 crc kubenswrapper[5023]: I0219 08:07:30.456372 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rndqk"] Feb 19 08:07:30 crc kubenswrapper[5023]: I0219 08:07:30.459403 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rndqk"] Feb 19 08:07:31 crc kubenswrapper[5023]: I0219 08:07:31.489531 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b9ec2c-86d2-40e9-b7bb-e7af21612798" path="/var/lib/kubelet/pods/a9b9ec2c-86d2-40e9-b7bb-e7af21612798/volumes" Feb 19 08:07:41 crc kubenswrapper[5023]: I0219 08:07:41.870384 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:07:41 crc kubenswrapper[5023]: I0219 08:07:41.871190 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:07:41 crc kubenswrapper[5023]: I0219 08:07:41.871271 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:07:41 crc kubenswrapper[5023]: I0219 08:07:41.872301 5023 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6fcc29396710781a5391009cb3d9d68a134c79958bcaa1a8f708e34f123e5a1"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:07:41 crc kubenswrapper[5023]: I0219 08:07:41.872390 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://e6fcc29396710781a5391009cb3d9d68a134c79958bcaa1a8f708e34f123e5a1" gracePeriod=600 Feb 19 08:07:42 crc kubenswrapper[5023]: I0219 08:07:42.484343 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="e6fcc29396710781a5391009cb3d9d68a134c79958bcaa1a8f708e34f123e5a1" exitCode=0 Feb 19 08:07:42 crc kubenswrapper[5023]: I0219 08:07:42.484403 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"e6fcc29396710781a5391009cb3d9d68a134c79958bcaa1a8f708e34f123e5a1"} Feb 19 08:07:42 crc kubenswrapper[5023]: I0219 08:07:42.485007 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"032bc002ef9d6211ff37891b971af058f31e755ae2b8ee7a564c359cdfecd43d"} Feb 19 08:07:42 crc kubenswrapper[5023]: I0219 08:07:42.485031 5023 scope.go:117] "RemoveContainer" containerID="f3d35e3bf5501b18344630c8ffaa95b82f50dd4d5070d4a4416877c582fd9676" Feb 19 08:09:44 crc kubenswrapper[5023]: I0219 08:09:44.863206 5023 scope.go:117] "RemoveContainer" containerID="a5159fd7803be180e4de8f1769d878ccad847bd851af067b98dd7c97ade98b01" Feb 19 
08:09:44 crc kubenswrapper[5023]: I0219 08:09:44.894407 5023 scope.go:117] "RemoveContainer" containerID="4181226703db5e57deb6948468a4142403311d56e86d23ec5269b627489ed360" Feb 19 08:10:11 crc kubenswrapper[5023]: I0219 08:10:11.870194 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:10:11 crc kubenswrapper[5023]: I0219 08:10:11.870946 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:10:41 crc kubenswrapper[5023]: I0219 08:10:41.870239 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:10:41 crc kubenswrapper[5023]: I0219 08:10:41.871143 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.621276 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27"] Feb 19 08:10:49 crc kubenswrapper[5023]: E0219 08:10:49.622776 5023 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a9b9ec2c-86d2-40e9-b7bb-e7af21612798" containerName="registry" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.622793 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b9ec2c-86d2-40e9-b7bb-e7af21612798" containerName="registry" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.622947 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b9ec2c-86d2-40e9-b7bb-e7af21612798" containerName="registry" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.623743 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.625686 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.637089 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27"] Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.824942 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.824992 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.825255 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdkc\" (UniqueName: \"kubernetes.io/projected/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-kube-api-access-lpdkc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.927537 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.927610 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.927681 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdkc\" (UniqueName: \"kubernetes.io/projected/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-kube-api-access-lpdkc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 
08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.928129 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.928213 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:49 crc kubenswrapper[5023]: I0219 08:10:49.950510 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdkc\" (UniqueName: \"kubernetes.io/projected/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-kube-api-access-lpdkc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:50 crc kubenswrapper[5023]: I0219 08:10:50.237531 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:50 crc kubenswrapper[5023]: I0219 08:10:50.427764 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27"] Feb 19 08:10:50 crc kubenswrapper[5023]: I0219 08:10:50.686980 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" event={"ID":"dafb8755-d116-4ada-8f8a-4b16ed12b6a1","Type":"ContainerStarted","Data":"a1987322f7d8e672a657a87f28669018a7922f98e9602a735333ba9f7302a090"} Feb 19 08:10:50 crc kubenswrapper[5023]: I0219 08:10:50.688392 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" event={"ID":"dafb8755-d116-4ada-8f8a-4b16ed12b6a1","Type":"ContainerStarted","Data":"30e770ca0126fda83ae862c818e6cd508869118a5b798ec0ac5747cb2451bea0"} Feb 19 08:10:51 crc kubenswrapper[5023]: I0219 08:10:51.696400 5023 generic.go:334] "Generic (PLEG): container finished" podID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerID="a1987322f7d8e672a657a87f28669018a7922f98e9602a735333ba9f7302a090" exitCode=0 Feb 19 08:10:51 crc kubenswrapper[5023]: I0219 08:10:51.696534 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" event={"ID":"dafb8755-d116-4ada-8f8a-4b16ed12b6a1","Type":"ContainerDied","Data":"a1987322f7d8e672a657a87f28669018a7922f98e9602a735333ba9f7302a090"} Feb 19 08:10:51 crc kubenswrapper[5023]: I0219 08:10:51.698883 5023 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:10:53 crc kubenswrapper[5023]: I0219 08:10:53.713191 5023 generic.go:334] "Generic (PLEG): container finished" podID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" 
containerID="29d61b3c236f99242c8fa2a9fe3bfc979166c830af11414586b995fe1e39144d" exitCode=0 Feb 19 08:10:53 crc kubenswrapper[5023]: I0219 08:10:53.713285 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" event={"ID":"dafb8755-d116-4ada-8f8a-4b16ed12b6a1","Type":"ContainerDied","Data":"29d61b3c236f99242c8fa2a9fe3bfc979166c830af11414586b995fe1e39144d"} Feb 19 08:10:54 crc kubenswrapper[5023]: I0219 08:10:54.720745 5023 generic.go:334] "Generic (PLEG): container finished" podID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerID="aa6500fedf5ca6ed03b96efec2894ef58c20d1f5bfcab222ec397a3f00c810c0" exitCode=0 Feb 19 08:10:54 crc kubenswrapper[5023]: I0219 08:10:54.720808 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" event={"ID":"dafb8755-d116-4ada-8f8a-4b16ed12b6a1","Type":"ContainerDied","Data":"aa6500fedf5ca6ed03b96efec2894ef58c20d1f5bfcab222ec397a3f00c810c0"} Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:55.999754 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.006434 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-bundle\") pod \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.006713 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-util\") pod \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.006791 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpdkc\" (UniqueName: \"kubernetes.io/projected/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-kube-api-access-lpdkc\") pod \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\" (UID: \"dafb8755-d116-4ada-8f8a-4b16ed12b6a1\") " Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.009242 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-bundle" (OuterVolumeSpecName: "bundle") pod "dafb8755-d116-4ada-8f8a-4b16ed12b6a1" (UID: "dafb8755-d116-4ada-8f8a-4b16ed12b6a1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.015841 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-kube-api-access-lpdkc" (OuterVolumeSpecName: "kube-api-access-lpdkc") pod "dafb8755-d116-4ada-8f8a-4b16ed12b6a1" (UID: "dafb8755-d116-4ada-8f8a-4b16ed12b6a1"). InnerVolumeSpecName "kube-api-access-lpdkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.099978 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-util" (OuterVolumeSpecName: "util") pod "dafb8755-d116-4ada-8f8a-4b16ed12b6a1" (UID: "dafb8755-d116-4ada-8f8a-4b16ed12b6a1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.109154 5023 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.109193 5023 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.109213 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpdkc\" (UniqueName: \"kubernetes.io/projected/dafb8755-d116-4ada-8f8a-4b16ed12b6a1-kube-api-access-lpdkc\") on node \"crc\" DevicePath \"\"" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.734495 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" event={"ID":"dafb8755-d116-4ada-8f8a-4b16ed12b6a1","Type":"ContainerDied","Data":"30e770ca0126fda83ae862c818e6cd508869118a5b798ec0ac5747cb2451bea0"} Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.734532 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27" Feb 19 08:10:56 crc kubenswrapper[5023]: I0219 08:10:56.734534 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e770ca0126fda83ae862c818e6cd508869118a5b798ec0ac5747cb2451bea0" Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.594747 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrqg4"] Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.595449 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-controller" containerID="cri-o://2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3" gracePeriod=30 Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.595636 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="sbdb" containerID="cri-o://7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf" gracePeriod=30 Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.595681 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="nbdb" containerID="cri-o://f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b" gracePeriod=30 Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.595718 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="northd" containerID="cri-o://be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de" gracePeriod=30 Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 
08:11:00.595753 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9" gracePeriod=30 Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.595795 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-node" containerID="cri-o://b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be" gracePeriod=30 Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.595835 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-acl-logging" containerID="cri-o://2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e" gracePeriod=30 Feb 19 08:11:00 crc kubenswrapper[5023]: I0219 08:11:00.635487 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" containerID="cri-o://04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2" gracePeriod=30 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.631582 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/3.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.634378 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovn-acl-logging/0.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.634922 5023 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovn-controller/0.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.635355 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675467 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-openvswitch\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675542 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-etc-openvswitch\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675589 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-netd\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675639 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2wtn\" (UniqueName: \"kubernetes.io/projected/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-kube-api-access-c2wtn\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675677 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-env-overrides\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675729 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-ovn-kubernetes\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675779 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-kubelet\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675797 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-log-socket\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675834 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-var-lib-openvswitch\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675852 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-bin\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: 
I0219 08:11:01.675873 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-systemd-units\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675930 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovn-node-metrics-cert\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.675963 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676000 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-script-lib\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676025 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-slash\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676046 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-systemd\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676064 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-ovn\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676155 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-config\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676181 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-netns\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676204 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-node-log\") pod \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\" (UID: \"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48\") " Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676192 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676222 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676238 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676251 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676498 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.676522 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.693882 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.693957 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.693987 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694010 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-log-socket" (OuterVolumeSpecName: "log-socket") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694362 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694399 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694427 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-slash" (OuterVolumeSpecName: "host-slash") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694782 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694849 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694879 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.694888 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-node-log" (OuterVolumeSpecName: "node-log") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.696914 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-kube-api-access-c2wtn" (OuterVolumeSpecName: "kube-api-access-c2wtn") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "kube-api-access-c2wtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.700250 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.736006 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" (UID: "cd9177d9-fb83-4fdf-bc43-c8cc552e8e48"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.758718 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wpjht"] Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759120 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-acl-logging" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759148 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-acl-logging" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759169 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kubecfg-setup" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759183 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kubecfg-setup" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759195 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759205 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759214 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="nbdb" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759223 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="nbdb" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759240 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerName="extract" Feb 19 
08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759249 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerName="extract" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759263 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759273 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759283 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759294 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759303 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759311 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759322 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerName="util" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759331 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerName="util" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759342 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="sbdb" Feb 19 08:11:01 crc 
kubenswrapper[5023]: I0219 08:11:01.759350 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="sbdb" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759360 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759369 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759381 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerName="pull" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759389 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerName="pull" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759399 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="northd" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759407 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="northd" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759418 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759426 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759439 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-node" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759449 5023 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-node" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759588 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-acl-logging" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759607 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759635 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovn-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759650 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759660 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759670 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759681 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafb8755-d116-4ada-8f8a-4b16ed12b6a1" containerName="extract" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759691 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759703 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="northd" Feb 19 08:11:01 crc 
kubenswrapper[5023]: I0219 08:11:01.759714 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="kube-rbac-proxy-node" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759723 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="nbdb" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759734 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="sbdb" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.759886 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.759897 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.760039 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerName="ovnkube-controller" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.763438 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.769981 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/2.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.770447 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/1.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.770488 5023 generic.go:334] "Generic (PLEG): container finished" podID="c4610eec-5318-4742-b598-a88feb94cf7d" containerID="53f82719807858d3252130c7af753083dc16b9fef14657edc2ba546952e32400" exitCode=2 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.770542 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerDied","Data":"53f82719807858d3252130c7af753083dc16b9fef14657edc2ba546952e32400"} Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.770595 5023 scope.go:117] "RemoveContainer" containerID="89800d1f1b59e8d54d7d22379c18e8efe2c0ccb5a385bc1d5538ac2a2e612cf2" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.771260 5023 scope.go:117] "RemoveContainer" containerID="53f82719807858d3252130c7af753083dc16b9fef14657edc2ba546952e32400" Feb 19 08:11:01 crc kubenswrapper[5023]: E0219 08:11:01.771565 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t9v9m_openshift-multus(c4610eec-5318-4742-b598-a88feb94cf7d)\"" pod="openshift-multus/multus-t9v9m" podUID="c4610eec-5318-4742-b598-a88feb94cf7d" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776683 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-run-ovn-kubernetes\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776720 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-run-netns\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776751 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tczb\" (UniqueName: \"kubernetes.io/projected/f438ebcf-a018-4bde-b2d2-62eb28b7764d-kube-api-access-7tczb\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776780 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-systemd-units\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776800 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-cni-netd\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776821 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-env-overrides\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776846 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-cni-bin\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776881 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776905 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-node-log\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776942 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-systemd\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.776975 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovnkube-script-lib\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777001 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-log-socket\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777027 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-var-lib-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777051 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-ovn\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777093 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovn-node-metrics-cert\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777115 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-slash\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777139 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777162 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovnkube-config\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777191 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-etc-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777215 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-kubelet\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 
08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777265 5023 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777279 5023 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777291 5023 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777303 5023 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777315 5023 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777328 5023 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777341 5023 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777353 5023 reconciler_common.go:293] 
"Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777361 5023 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777370 5023 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777381 5023 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777390 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2wtn\" (UniqueName: \"kubernetes.io/projected/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-kube-api-access-c2wtn\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777400 5023 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777410 5023 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777419 5023 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777431 5023 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777443 5023 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777452 5023 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777461 5023 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.777470 5023 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.795238 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovnkube-controller/3.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.804053 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovn-acl-logging/0.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.806476 5023 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrqg4_cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/ovn-controller/0.log" Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808205 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2" exitCode=0 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808252 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf" exitCode=0 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808261 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b" exitCode=0 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808270 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de" exitCode=0 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808277 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9" exitCode=0 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808287 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be" exitCode=0 Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808295 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e" exitCode=143 Feb 19 08:11:01 crc kubenswrapper[5023]: 
I0219 08:11:01.808306 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" containerID="2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3" exitCode=143
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808336 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808374 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808387 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808397 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808407 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808419 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808434 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808453 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808459 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808465 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808471 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808477 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808471 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.808482 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811670 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811695 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811704 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811721 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811741 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811749 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811755 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811760 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811766 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811771 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811776 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811781 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811787 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811792 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811800 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811810 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811818 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811824 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811833 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811839 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811846 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811853 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811860 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811866 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811874 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811882 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrqg4" event={"ID":"cd9177d9-fb83-4fdf-bc43-c8cc552e8e48","Type":"ContainerDied","Data":"b0af1be7e998ebbb197246f6208508faa6925dd3cce41a15fb1cadf6d88df52a"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811890 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811897 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811902 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811908 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811913 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811918 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811925 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811930 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811936 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.811941 5023 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"}
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880575 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880607 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovnkube-config\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880650 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-etc-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880666 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-kubelet\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880687 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-run-ovn-kubernetes\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880712 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-run-netns\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880736 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tczb\" (UniqueName: \"kubernetes.io/projected/f438ebcf-a018-4bde-b2d2-62eb28b7764d-kube-api-access-7tczb\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880760 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-env-overrides\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880777 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-systemd-units\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880795 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-cni-netd\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880816 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-cni-bin\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880822 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880858 5023 scope.go:117] "RemoveContainer" containerID="04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880875 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880833 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880917 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-etc-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880944 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-kubelet\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880973 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-node-log\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881001 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-systemd-units\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881042 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-run-netns\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881064 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-run-ovn-kubernetes\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.880941 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-node-log\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881221 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-systemd\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881262 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovnkube-script-lib\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881324 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-log-socket\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881395 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-var-lib-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881434 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-ovn\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881479 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovn-node-metrics-cert\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881508 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-slash\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881578 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovnkube-config\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881610 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-log-socket\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881582 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-var-lib-openvswitch\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881659 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-slash\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881664 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-ovn\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881091 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-cni-bin\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881069 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-host-cni-netd\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881722 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f438ebcf-a018-4bde-b2d2-62eb28b7764d-run-systemd\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.881980 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-env-overrides\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.882432 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovnkube-script-lib\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.889053 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f438ebcf-a018-4bde-b2d2-62eb28b7764d-ovn-node-metrics-cert\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.915899 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.934938 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tczb\" (UniqueName: \"kubernetes.io/projected/f438ebcf-a018-4bde-b2d2-62eb28b7764d-kube-api-access-7tczb\") pod \"ovnkube-node-wpjht\" (UID: \"f438ebcf-a018-4bde-b2d2-62eb28b7764d\") " pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.956414 5023 scope.go:117] "RemoveContainer" containerID="7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.961913 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrqg4"]
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.976785 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrqg4"]
Feb 19 08:11:01 crc kubenswrapper[5023]: I0219 08:11:01.983938 5023 scope.go:117] "RemoveContainer" containerID="f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.004879 5023 scope.go:117] "RemoveContainer" containerID="be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.020545 5023 scope.go:117] "RemoveContainer" containerID="9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.037942 5023 scope.go:117] "RemoveContainer" containerID="b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.054919 5023 scope.go:117] "RemoveContainer" containerID="2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.079022 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.086759 5023 scope.go:117] "RemoveContainer" containerID="2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.134109 5023 scope.go:117] "RemoveContainer" containerID="7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.189968 5023 scope.go:117] "RemoveContainer" containerID="04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.191322 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": container with ID starting with 04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2 not found: ID does not exist" containerID="04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.191358 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"} err="failed to get container status \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": rpc error: code = NotFound desc = could not find container \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": container with ID starting with 04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2 not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.191384 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.194770 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": container with ID starting with 182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986 not found: ID does not exist" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.194838 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"} err="failed to get container status \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": rpc error: code = NotFound desc = could not find container \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": container with ID starting with 182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986 not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.194876 5023 scope.go:117] "RemoveContainer" containerID="7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.195221 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": container with ID starting with 7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf not found: ID does not exist" containerID="7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.195265 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"} err="failed to get container status \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": rpc error: code = NotFound desc = could not find container \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": container with ID starting with 7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.195299 5023 scope.go:117] "RemoveContainer" containerID="f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.197935 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": container with ID starting with f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b not found: ID does not exist" containerID="f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.197978 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"} err="failed to get container status \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": rpc error: code = NotFound desc = could not find container \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": container with ID starting with f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.198008 5023 scope.go:117] "RemoveContainer" containerID="be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.199319 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": container with ID starting with be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de not found: ID does not exist" containerID="be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.199348 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"} err="failed to get container status \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": rpc error: code = NotFound desc = could not find container \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": container with ID starting with be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.199364 5023 scope.go:117] "RemoveContainer" containerID="9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.199641 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": container with ID starting with 9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9 not found: ID does not exist" containerID="9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.199662 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"} err="failed to get container status \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": rpc error: code = NotFound desc = could not find container \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": container with ID starting with 9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9 not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.199677 5023 scope.go:117] "RemoveContainer" containerID="b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.199938 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": container with ID starting with b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be not found: ID does not exist" containerID="b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.199958 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"} err="failed to get container status \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": rpc error: code = NotFound desc = could not find container \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": container with ID starting with b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.199973 5023 scope.go:117] "RemoveContainer" containerID="2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.200232 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": container with ID starting with 2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e not found: ID does not exist" containerID="2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.200255 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"} err="failed to get container status \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": rpc error: code = NotFound desc = could not find container \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": container with ID starting with 2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e not found: ID does not exist"
Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.200267 5023 scope.go:117] "RemoveContainer" containerID="2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"
Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.202578 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": container with ID starting with 2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3 not found: ID does not exist"
containerID="2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.202600 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"} err="failed to get container status \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": rpc error: code = NotFound desc = could not find container \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": container with ID starting with 2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.202632 5023 scope.go:117] "RemoveContainer" containerID="7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c" Feb 19 08:11:02 crc kubenswrapper[5023]: E0219 08:11:02.205526 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": container with ID starting with 7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c not found: ID does not exist" containerID="7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.205550 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"} err="failed to get container status \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": rpc error: code = NotFound desc = could not find container \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": container with ID starting with 7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.205566 5023 scope.go:117] 
"RemoveContainer" containerID="04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.206309 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"} err="failed to get container status \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": rpc error: code = NotFound desc = could not find container \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": container with ID starting with 04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.206330 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.206578 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"} err="failed to get container status \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": rpc error: code = NotFound desc = could not find container \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": container with ID starting with 182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.206594 5023 scope.go:117] "RemoveContainer" containerID="7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.206870 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"} err="failed to get container status \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": rpc error: code = 
NotFound desc = could not find container \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": container with ID starting with 7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.206886 5023 scope.go:117] "RemoveContainer" containerID="f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.207129 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"} err="failed to get container status \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": rpc error: code = NotFound desc = could not find container \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": container with ID starting with f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.207145 5023 scope.go:117] "RemoveContainer" containerID="be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.207387 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"} err="failed to get container status \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": rpc error: code = NotFound desc = could not find container \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": container with ID starting with be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.207403 5023 scope.go:117] "RemoveContainer" containerID="9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9" Feb 19 08:11:02 crc 
kubenswrapper[5023]: I0219 08:11:02.207655 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"} err="failed to get container status \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": rpc error: code = NotFound desc = could not find container \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": container with ID starting with 9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.207673 5023 scope.go:117] "RemoveContainer" containerID="b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.210019 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"} err="failed to get container status \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": rpc error: code = NotFound desc = could not find container \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": container with ID starting with b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.210094 5023 scope.go:117] "RemoveContainer" containerID="2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.213695 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"} err="failed to get container status \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": rpc error: code = NotFound desc = could not find container \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": container 
with ID starting with 2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.213724 5023 scope.go:117] "RemoveContainer" containerID="2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.217694 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"} err="failed to get container status \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": rpc error: code = NotFound desc = could not find container \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": container with ID starting with 2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.217729 5023 scope.go:117] "RemoveContainer" containerID="7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.220763 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"} err="failed to get container status \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": rpc error: code = NotFound desc = could not find container \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": container with ID starting with 7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.220791 5023 scope.go:117] "RemoveContainer" containerID="04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.221361 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"} err="failed to get container status \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": rpc error: code = NotFound desc = could not find container \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": container with ID starting with 04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.221410 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.222245 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"} err="failed to get container status \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": rpc error: code = NotFound desc = could not find container \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": container with ID starting with 182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.222265 5023 scope.go:117] "RemoveContainer" containerID="7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.226041 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"} err="failed to get container status \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": rpc error: code = NotFound desc = could not find container \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": container with ID starting with 7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf not found: ID does not 
exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.226067 5023 scope.go:117] "RemoveContainer" containerID="f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.230033 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"} err="failed to get container status \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": rpc error: code = NotFound desc = could not find container \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": container with ID starting with f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.230062 5023 scope.go:117] "RemoveContainer" containerID="be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.231929 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"} err="failed to get container status \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": rpc error: code = NotFound desc = could not find container \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": container with ID starting with be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.231956 5023 scope.go:117] "RemoveContainer" containerID="9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.234839 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"} err="failed to get container status 
\"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": rpc error: code = NotFound desc = could not find container \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": container with ID starting with 9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.234868 5023 scope.go:117] "RemoveContainer" containerID="b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.235866 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"} err="failed to get container status \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": rpc error: code = NotFound desc = could not find container \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": container with ID starting with b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.235886 5023 scope.go:117] "RemoveContainer" containerID="2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.242756 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"} err="failed to get container status \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": rpc error: code = NotFound desc = could not find container \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": container with ID starting with 2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.242801 5023 scope.go:117] "RemoveContainer" 
containerID="2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.246695 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"} err="failed to get container status \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": rpc error: code = NotFound desc = could not find container \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": container with ID starting with 2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.246719 5023 scope.go:117] "RemoveContainer" containerID="7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.250445 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"} err="failed to get container status \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": rpc error: code = NotFound desc = could not find container \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": container with ID starting with 7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.250462 5023 scope.go:117] "RemoveContainer" containerID="04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.250873 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2"} err="failed to get container status \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": rpc error: code = NotFound desc = could 
not find container \"04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2\": container with ID starting with 04e16690b527ce35487d0d0c7fc7db6d21dd51b9566794f72ff54b8296ed41c2 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.250940 5023 scope.go:117] "RemoveContainer" containerID="182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.252157 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986"} err="failed to get container status \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": rpc error: code = NotFound desc = could not find container \"182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986\": container with ID starting with 182bbbb2f590da0c212d014a85ced6e500dcc8bbf917381d60caefa4362d6986 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.252182 5023 scope.go:117] "RemoveContainer" containerID="7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.254804 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf"} err="failed to get container status \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": rpc error: code = NotFound desc = could not find container \"7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf\": container with ID starting with 7db841b730daa53f08e314c958e522046a6e1eb9b8dd80f929e1850367bbebcf not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.254837 5023 scope.go:117] "RemoveContainer" containerID="f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 
08:11:02.256548 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b"} err="failed to get container status \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": rpc error: code = NotFound desc = could not find container \"f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b\": container with ID starting with f389fe6c5e1486c4f456d7cb347a95a0d08c8809c9b95cf34d602b9a7ecc363b not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.256571 5023 scope.go:117] "RemoveContainer" containerID="be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.257544 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de"} err="failed to get container status \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": rpc error: code = NotFound desc = could not find container \"be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de\": container with ID starting with be78492394f7bdc703f009e09eedb4f9adf22724e0d06ffe0c4adc14f341e7de not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.257587 5023 scope.go:117] "RemoveContainer" containerID="9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.258824 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9"} err="failed to get container status \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": rpc error: code = NotFound desc = could not find container \"9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9\": container with ID starting with 
9f55c51dfa6ac6df7e770f10186c18d7bad89d9ae8d2cb990146e454315683a9 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.258852 5023 scope.go:117] "RemoveContainer" containerID="b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.259334 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be"} err="failed to get container status \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": rpc error: code = NotFound desc = could not find container \"b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be\": container with ID starting with b476bf788b657719114d3b0fc33c287586e7df1d8c78b5b4188a643fb9d409be not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.259366 5023 scope.go:117] "RemoveContainer" containerID="2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.261216 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e"} err="failed to get container status \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": rpc error: code = NotFound desc = could not find container \"2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e\": container with ID starting with 2369ae8e1cc6d44bf4f73203906a6b11b239cf2235ca6045f37d39b9990b6a5e not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.261244 5023 scope.go:117] "RemoveContainer" containerID="2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.268795 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3"} err="failed to get container status \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": rpc error: code = NotFound desc = could not find container \"2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3\": container with ID starting with 2ecdb95278e8e3de40b9e8773cfb1d4da15de78a1f1b43b1cfe70b19a1b15cf3 not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.268865 5023 scope.go:117] "RemoveContainer" containerID="7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.273748 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c"} err="failed to get container status \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": rpc error: code = NotFound desc = could not find container \"7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c\": container with ID starting with 7ccc1da9dd9ead4996cef5f3c57a1a224a732081c1ab6c84f32005d2323a3c5c not found: ID does not exist" Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.813645 5023 generic.go:334] "Generic (PLEG): container finished" podID="f438ebcf-a018-4bde-b2d2-62eb28b7764d" containerID="3ffb5844a5e1b87ce7946dc60f98f5648a34129ec6346ecb748cbb5457ea1607" exitCode=0 Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.813716 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerDied","Data":"3ffb5844a5e1b87ce7946dc60f98f5648a34129ec6346ecb748cbb5457ea1607"} Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.814668 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" 
event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"48a40433cda04f63761094289368bd601cdd76adbdbfa0e931fda4e984f69c42"} Feb 19 08:11:02 crc kubenswrapper[5023]: I0219 08:11:02.816523 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/2.log" Feb 19 08:11:03 crc kubenswrapper[5023]: I0219 08:11:03.484328 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9177d9-fb83-4fdf-bc43-c8cc552e8e48" path="/var/lib/kubelet/pods/cd9177d9-fb83-4fdf-bc43-c8cc552e8e48/volumes" Feb 19 08:11:03 crc kubenswrapper[5023]: I0219 08:11:03.824804 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"5242ad49c3f1d24c136bfca494b65af872372040ad8b7d2b66cfde16f0a1c201"} Feb 19 08:11:03 crc kubenswrapper[5023]: I0219 08:11:03.824843 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"d9e761f6638194e84c8de1ce62b28f57ee2e3af95b149c5ff81a24f3b886810b"} Feb 19 08:11:03 crc kubenswrapper[5023]: I0219 08:11:03.824856 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"d5575c1a61bc7bfd4288e33ab3164328a820669e0122d16e3f7a54f1ee4c5ee1"} Feb 19 08:11:03 crc kubenswrapper[5023]: I0219 08:11:03.824867 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"e4d0adc968f947a207748d68b9a73cf25aab927232c34e759ee5453e5329c6cd"} Feb 19 08:11:04 crc kubenswrapper[5023]: I0219 08:11:04.832922 5023 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"a788cea86dc1f69a73046444163edfda572b3c271b9963cfbf53071110d9fca9"} Feb 19 08:11:04 crc kubenswrapper[5023]: I0219 08:11:04.834079 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"82ca01a466d275d1262a67c894195f9986bc785e74027be2aed4fa0d07649a27"} Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.675038 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84"] Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.676232 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.680395 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.680515 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.681111 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qwbhh" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.853991 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rmw7\" (UniqueName: \"kubernetes.io/projected/9ac16bf5-97d2-478b-a915-9f9919ecd59e-kube-api-access-9rmw7\") pod \"obo-prometheus-operator-68bc856cb9-qgn84\" (UID: \"9ac16bf5-97d2-478b-a915-9f9919ecd59e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.869544 5023 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f"] Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.870394 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.872492 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.872575 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fbcgb" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.877717 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk"] Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.878675 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.955232 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rmw7\" (UniqueName: \"kubernetes.io/projected/9ac16bf5-97d2-478b-a915-9f9919ecd59e-kube-api-access-9rmw7\") pod \"obo-prometheus-operator-68bc856cb9-qgn84\" (UID: \"9ac16bf5-97d2-478b-a915-9f9919ecd59e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.975850 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rmw7\" (UniqueName: \"kubernetes.io/projected/9ac16bf5-97d2-478b-a915-9f9919ecd59e-kube-api-access-9rmw7\") pod \"obo-prometheus-operator-68bc856cb9-qgn84\" (UID: \"9ac16bf5-97d2-478b-a915-9f9919ecd59e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:05 crc kubenswrapper[5023]: I0219 08:11:05.991800 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.045547 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(283cb1eee26535736cfb3b75208142b5d94bbbb8c863e2beb9eb7779fd2872d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.045653 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(283cb1eee26535736cfb3b75208142b5d94bbbb8c863e2beb9eb7779fd2872d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.045679 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(283cb1eee26535736cfb3b75208142b5d94bbbb8c863e2beb9eb7779fd2872d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.045720 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators(9ac16bf5-97d2-478b-a915-9f9919ecd59e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators(9ac16bf5-97d2-478b-a915-9f9919ecd59e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(283cb1eee26535736cfb3b75208142b5d94bbbb8c863e2beb9eb7779fd2872d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" podUID="9ac16bf5-97d2-478b-a915-9f9919ecd59e" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.049533 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jghsx"] Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.050222 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.052607 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.054033 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-tn4t7" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.058892 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5c5f372-8b6a-4454-bc6a-0dcda2907ec1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f\" (UID: \"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.059005 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5c5f372-8b6a-4454-bc6a-0dcda2907ec1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f\" (UID: \"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.059102 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b26147b-3c73-4b0d-8810-38d893b67b6b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk\" (UID: \"4b26147b-3c73-4b0d-8810-38d893b67b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.059309 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b26147b-3c73-4b0d-8810-38d893b67b6b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk\" (UID: \"4b26147b-3c73-4b0d-8810-38d893b67b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.160963 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5c5f372-8b6a-4454-bc6a-0dcda2907ec1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f\" (UID: \"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.161038 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5c5f372-8b6a-4454-bc6a-0dcda2907ec1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f\" (UID: \"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.161099 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j42fx\" (UniqueName: 
\"kubernetes.io/projected/abccc29c-4404-4fbf-abec-9046e05e6bc3-kube-api-access-j42fx\") pod \"observability-operator-59bdc8b94-jghsx\" (UID: \"abccc29c-4404-4fbf-abec-9046e05e6bc3\") " pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.161136 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b26147b-3c73-4b0d-8810-38d893b67b6b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk\" (UID: \"4b26147b-3c73-4b0d-8810-38d893b67b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.161166 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b26147b-3c73-4b0d-8810-38d893b67b6b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk\" (UID: \"4b26147b-3c73-4b0d-8810-38d893b67b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.161213 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/abccc29c-4404-4fbf-abec-9046e05e6bc3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jghsx\" (UID: \"abccc29c-4404-4fbf-abec-9046e05e6bc3\") " pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.166203 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5c5f372-8b6a-4454-bc6a-0dcda2907ec1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f\" (UID: \"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.166217 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5c5f372-8b6a-4454-bc6a-0dcda2907ec1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f\" (UID: \"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.166217 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b26147b-3c73-4b0d-8810-38d893b67b6b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk\" (UID: \"4b26147b-3c73-4b0d-8810-38d893b67b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.166537 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b26147b-3c73-4b0d-8810-38d893b67b6b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk\" (UID: \"4b26147b-3c73-4b0d-8810-38d893b67b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.192495 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.202328 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.226101 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vg2dl"] Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.227014 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.227846 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(fc1f554337477b211256b5fd31c987436829f6f77bd0c225394f14a932b0b702): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.227942 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(fc1f554337477b211256b5fd31c987436829f6f77bd0c225394f14a932b0b702): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.227973 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(fc1f554337477b211256b5fd31c987436829f6f77bd0c225394f14a932b0b702): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.228041 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators(c5c5f372-8b6a-4454-bc6a-0dcda2907ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators(c5c5f372-8b6a-4454-bc6a-0dcda2907ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(fc1f554337477b211256b5fd31c987436829f6f77bd0c225394f14a932b0b702): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" podUID="c5c5f372-8b6a-4454-bc6a-0dcda2907ec1" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.229515 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-s55vd" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.233574 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(64bd626c0c1ad00ee41d9d2889c56675feed0de79025bb0733ae9e22f0d150ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.233651 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(64bd626c0c1ad00ee41d9d2889c56675feed0de79025bb0733ae9e22f0d150ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.233681 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(64bd626c0c1ad00ee41d9d2889c56675feed0de79025bb0733ae9e22f0d150ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.233731 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators(4b26147b-3c73-4b0d-8810-38d893b67b6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators(4b26147b-3c73-4b0d-8810-38d893b67b6b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(64bd626c0c1ad00ee41d9d2889c56675feed0de79025bb0733ae9e22f0d150ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" podUID="4b26147b-3c73-4b0d-8810-38d893b67b6b" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.262546 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j42fx\" (UniqueName: \"kubernetes.io/projected/abccc29c-4404-4fbf-abec-9046e05e6bc3-kube-api-access-j42fx\") pod \"observability-operator-59bdc8b94-jghsx\" (UID: \"abccc29c-4404-4fbf-abec-9046e05e6bc3\") " pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.262659 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/abccc29c-4404-4fbf-abec-9046e05e6bc3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jghsx\" (UID: \"abccc29c-4404-4fbf-abec-9046e05e6bc3\") " pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.266538 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/abccc29c-4404-4fbf-abec-9046e05e6bc3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jghsx\" (UID: \"abccc29c-4404-4fbf-abec-9046e05e6bc3\") " pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.285378 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j42fx\" (UniqueName: \"kubernetes.io/projected/abccc29c-4404-4fbf-abec-9046e05e6bc3-kube-api-access-j42fx\") pod \"observability-operator-59bdc8b94-jghsx\" (UID: \"abccc29c-4404-4fbf-abec-9046e05e6bc3\") " pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.364130 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7dk7\" (UniqueName: \"kubernetes.io/projected/49bbb335-22f1-432d-8508-9575cf6006ac-kube-api-access-s7dk7\") pod \"perses-operator-5bf474d74f-vg2dl\" (UID: \"49bbb335-22f1-432d-8508-9575cf6006ac\") " pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.364730 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/49bbb335-22f1-432d-8508-9575cf6006ac-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vg2dl\" (UID: \"49bbb335-22f1-432d-8508-9575cf6006ac\") " pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.371002 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.397267 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3d1fd1563d7cbbe7174b07451af3a228d8e78d4475f7996429ad8fb3c689736): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.397321 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3d1fd1563d7cbbe7174b07451af3a228d8e78d4475f7996429ad8fb3c689736): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.397346 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3d1fd1563d7cbbe7174b07451af3a228d8e78d4475f7996429ad8fb3c689736): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.397386 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-jghsx_openshift-operators(abccc29c-4404-4fbf-abec-9046e05e6bc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-jghsx_openshift-operators(abccc29c-4404-4fbf-abec-9046e05e6bc3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3d1fd1563d7cbbe7174b07451af3a228d8e78d4475f7996429ad8fb3c689736): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" podUID="abccc29c-4404-4fbf-abec-9046e05e6bc3" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.466134 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/49bbb335-22f1-432d-8508-9575cf6006ac-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vg2dl\" (UID: \"49bbb335-22f1-432d-8508-9575cf6006ac\") " pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.466202 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7dk7\" (UniqueName: \"kubernetes.io/projected/49bbb335-22f1-432d-8508-9575cf6006ac-kube-api-access-s7dk7\") pod \"perses-operator-5bf474d74f-vg2dl\" (UID: \"49bbb335-22f1-432d-8508-9575cf6006ac\") " pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.466984 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/49bbb335-22f1-432d-8508-9575cf6006ac-openshift-service-ca\") pod \"perses-operator-5bf474d74f-vg2dl\" (UID: \"49bbb335-22f1-432d-8508-9575cf6006ac\") " pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.484065 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7dk7\" (UniqueName: \"kubernetes.io/projected/49bbb335-22f1-432d-8508-9575cf6006ac-kube-api-access-s7dk7\") pod \"perses-operator-5bf474d74f-vg2dl\" (UID: \"49bbb335-22f1-432d-8508-9575cf6006ac\") " pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.548099 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.572501 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(81563e6b4a24acd88c6920e105fe2a7bbe4533eab56e9f46acc60d1cc64ab4f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.572578 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(81563e6b4a24acd88c6920e105fe2a7bbe4533eab56e9f46acc60d1cc64ab4f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.572603 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(81563e6b4a24acd88c6920e105fe2a7bbe4533eab56e9f46acc60d1cc64ab4f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:06 crc kubenswrapper[5023]: E0219 08:11:06.572663 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vg2dl_openshift-operators(49bbb335-22f1-432d-8508-9575cf6006ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vg2dl_openshift-operators(49bbb335-22f1-432d-8508-9575cf6006ac)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(81563e6b4a24acd88c6920e105fe2a7bbe4533eab56e9f46acc60d1cc64ab4f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" podUID="49bbb335-22f1-432d-8508-9575cf6006ac" Feb 19 08:11:06 crc kubenswrapper[5023]: I0219 08:11:06.846867 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"bdbd3251052044fd9ce5a71ff295ce936626c5d31c6c17d113ba6854275db479"} Feb 19 08:11:08 crc kubenswrapper[5023]: I0219 08:11:08.861320 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" event={"ID":"f438ebcf-a018-4bde-b2d2-62eb28b7764d","Type":"ContainerStarted","Data":"d1079eb8f53c503e9fc5017d19c2641f1299bc480f3664643ae52df5d03b1d4e"} Feb 19 08:11:08 crc kubenswrapper[5023]: I0219 08:11:08.861725 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:08 crc kubenswrapper[5023]: I0219 08:11:08.861762 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:08 crc kubenswrapper[5023]: I0219 08:11:08.892960 5023 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:08 crc kubenswrapper[5023]: I0219 08:11:08.893521 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:08 crc kubenswrapper[5023]: I0219 08:11:08.902586 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" podStartSLOduration=7.9025694600000005 podStartE2EDuration="7.90256946s" podCreationTimestamp="2026-02-19 08:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:11:08.900094614 +0000 UTC m=+626.557213562" watchObservedRunningTime="2026-02-19 08:11:08.90256946 +0000 UTC m=+626.559688408" Feb 19 08:11:09 crc kubenswrapper[5023]: I0219 08:11:09.866999 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.184048 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84"] Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.184446 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.184859 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.190385 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f"] Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.190454 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.190738 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.197635 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk"] Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.197778 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.198425 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.234783 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vg2dl"] Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.234992 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.235777 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.245472 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jghsx"] Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.245742 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:10 crc kubenswrapper[5023]: I0219 08:11:10.246409 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.249890 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(cb8e1e6f83c69c026c09c2a967302b83faa0f81165e4a60a5ba15584d5ce0e41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.249990 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(cb8e1e6f83c69c026c09c2a967302b83faa0f81165e4a60a5ba15584d5ce0e41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.250027 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(cb8e1e6f83c69c026c09c2a967302b83faa0f81165e4a60a5ba15584d5ce0e41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.250096 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators(c5c5f372-8b6a-4454-bc6a-0dcda2907ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators(c5c5f372-8b6a-4454-bc6a-0dcda2907ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(cb8e1e6f83c69c026c09c2a967302b83faa0f81165e4a60a5ba15584d5ce0e41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" podUID="c5c5f372-8b6a-4454-bc6a-0dcda2907ec1" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.259525 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(ebb7110589cab5c3ed92ad06e972cfe0e152f1227c5907926bfed5cb9f0f3806): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.259605 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(ebb7110589cab5c3ed92ad06e972cfe0e152f1227c5907926bfed5cb9f0f3806): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.259687 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(ebb7110589cab5c3ed92ad06e972cfe0e152f1227c5907926bfed5cb9f0f3806): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.259743 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators(9ac16bf5-97d2-478b-a915-9f9919ecd59e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators(9ac16bf5-97d2-478b-a915-9f9919ecd59e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(ebb7110589cab5c3ed92ad06e972cfe0e152f1227c5907926bfed5cb9f0f3806): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" podUID="9ac16bf5-97d2-478b-a915-9f9919ecd59e" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.288999 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(738ffc20ba039860a27bae054afdd9c9e41924a3203206ad70c90df0ebac9e30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.289095 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(738ffc20ba039860a27bae054afdd9c9e41924a3203206ad70c90df0ebac9e30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.289121 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(738ffc20ba039860a27bae054afdd9c9e41924a3203206ad70c90df0ebac9e30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.289195 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators(4b26147b-3c73-4b0d-8810-38d893b67b6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators(4b26147b-3c73-4b0d-8810-38d893b67b6b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(738ffc20ba039860a27bae054afdd9c9e41924a3203206ad70c90df0ebac9e30): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" podUID="4b26147b-3c73-4b0d-8810-38d893b67b6b" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.305257 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(7ec38d23ee6b81863f1f2b1c9890d0a7be42d7ba0de516700d1d236f4eaec25e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.305345 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(7ec38d23ee6b81863f1f2b1c9890d0a7be42d7ba0de516700d1d236f4eaec25e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.305373 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(7ec38d23ee6b81863f1f2b1c9890d0a7be42d7ba0de516700d1d236f4eaec25e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.305435 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-jghsx_openshift-operators(abccc29c-4404-4fbf-abec-9046e05e6bc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-jghsx_openshift-operators(abccc29c-4404-4fbf-abec-9046e05e6bc3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(7ec38d23ee6b81863f1f2b1c9890d0a7be42d7ba0de516700d1d236f4eaec25e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" podUID="abccc29c-4404-4fbf-abec-9046e05e6bc3" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.311108 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(69ca6312d62eb712cbf4615dcc6fb9732b2ae738d9d74a53de93fbce50d0dcb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.311157 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(69ca6312d62eb712cbf4615dcc6fb9732b2ae738d9d74a53de93fbce50d0dcb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.311177 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(69ca6312d62eb712cbf4615dcc6fb9732b2ae738d9d74a53de93fbce50d0dcb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:10 crc kubenswrapper[5023]: E0219 08:11:10.311216 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vg2dl_openshift-operators(49bbb335-22f1-432d-8508-9575cf6006ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vg2dl_openshift-operators(49bbb335-22f1-432d-8508-9575cf6006ac)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(69ca6312d62eb712cbf4615dcc6fb9732b2ae738d9d74a53de93fbce50d0dcb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" podUID="49bbb335-22f1-432d-8508-9575cf6006ac" Feb 19 08:11:11 crc kubenswrapper[5023]: I0219 08:11:11.870292 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:11:11 crc kubenswrapper[5023]: I0219 08:11:11.870608 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:11:11 crc kubenswrapper[5023]: I0219 08:11:11.870674 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:11:11 crc kubenswrapper[5023]: I0219 08:11:11.871279 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"032bc002ef9d6211ff37891b971af058f31e755ae2b8ee7a564c359cdfecd43d"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:11:11 crc kubenswrapper[5023]: I0219 08:11:11.871329 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://032bc002ef9d6211ff37891b971af058f31e755ae2b8ee7a564c359cdfecd43d" gracePeriod=600 Feb 19 08:11:12 crc kubenswrapper[5023]: I0219 08:11:12.886230 5023 
generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="032bc002ef9d6211ff37891b971af058f31e755ae2b8ee7a564c359cdfecd43d" exitCode=0 Feb 19 08:11:12 crc kubenswrapper[5023]: I0219 08:11:12.886286 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"032bc002ef9d6211ff37891b971af058f31e755ae2b8ee7a564c359cdfecd43d"} Feb 19 08:11:12 crc kubenswrapper[5023]: I0219 08:11:12.886555 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"c9107fa6c65c5bdaadd0e295cacd61be82459a4c5b244fe42220dcb2855d3001"} Feb 19 08:11:12 crc kubenswrapper[5023]: I0219 08:11:12.886574 5023 scope.go:117] "RemoveContainer" containerID="e6fcc29396710781a5391009cb3d9d68a134c79958bcaa1a8f708e34f123e5a1" Feb 19 08:11:13 crc kubenswrapper[5023]: I0219 08:11:13.480309 5023 scope.go:117] "RemoveContainer" containerID="53f82719807858d3252130c7af753083dc16b9fef14657edc2ba546952e32400" Feb 19 08:11:13 crc kubenswrapper[5023]: E0219 08:11:13.480881 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t9v9m_openshift-multus(c4610eec-5318-4742-b598-a88feb94cf7d)\"" pod="openshift-multus/multus-t9v9m" podUID="c4610eec-5318-4742-b598-a88feb94cf7d" Feb 19 08:11:21 crc kubenswrapper[5023]: I0219 08:11:21.475865 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:21 crc kubenswrapper[5023]: I0219 08:11:21.475905 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:21 crc kubenswrapper[5023]: I0219 08:11:21.476780 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:21 crc kubenswrapper[5023]: I0219 08:11:21.476875 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.521899 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(7f3f89a1fb4ef1db2709b925a74cd67b42de1040ff07dcb75f1ad22c5b9b13d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.521977 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(7f3f89a1fb4ef1db2709b925a74cd67b42de1040ff07dcb75f1ad22c5b9b13d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.522008 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(7f3f89a1fb4ef1db2709b925a74cd67b42de1040ff07dcb75f1ad22c5b9b13d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.522072 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators(4b26147b-3c73-4b0d-8810-38d893b67b6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators(4b26147b-3c73-4b0d-8810-38d893b67b6b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_openshift-operators_4b26147b-3c73-4b0d-8810-38d893b67b6b_0(7f3f89a1fb4ef1db2709b925a74cd67b42de1040ff07dcb75f1ad22c5b9b13d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" podUID="4b26147b-3c73-4b0d-8810-38d893b67b6b" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.526653 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3674e51436300f986e1d3edc6969d64464b5ee05aea26f79db5151d2ac2bbcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.526747 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3674e51436300f986e1d3edc6969d64464b5ee05aea26f79db5151d2ac2bbcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.526772 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3674e51436300f986e1d3edc6969d64464b5ee05aea26f79db5151d2ac2bbcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-jghsx" Feb 19 08:11:21 crc kubenswrapper[5023]: E0219 08:11:21.526842 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-jghsx_openshift-operators(abccc29c-4404-4fbf-abec-9046e05e6bc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-jghsx_openshift-operators(abccc29c-4404-4fbf-abec-9046e05e6bc3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jghsx_openshift-operators_abccc29c-4404-4fbf-abec-9046e05e6bc3_0(f3674e51436300f986e1d3edc6969d64464b5ee05aea26f79db5151d2ac2bbcc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" podUID="abccc29c-4404-4fbf-abec-9046e05e6bc3" Feb 19 08:11:22 crc kubenswrapper[5023]: I0219 08:11:22.475930 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:22 crc kubenswrapper[5023]: I0219 08:11:22.476610 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:22 crc kubenswrapper[5023]: E0219 08:11:22.517156 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(277985f1542a11bcb3a94a413d7b908a0b6f4bd3ff9cbb63f8e11ec19f0bdb53): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 08:11:22 crc kubenswrapper[5023]: E0219 08:11:22.517254 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(277985f1542a11bcb3a94a413d7b908a0b6f4bd3ff9cbb63f8e11ec19f0bdb53): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:22 crc kubenswrapper[5023]: E0219 08:11:22.517302 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(277985f1542a11bcb3a94a413d7b908a0b6f4bd3ff9cbb63f8e11ec19f0bdb53): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" Feb 19 08:11:22 crc kubenswrapper[5023]: E0219 08:11:22.517371 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators(9ac16bf5-97d2-478b-a915-9f9919ecd59e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators(9ac16bf5-97d2-478b-a915-9f9919ecd59e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qgn84_openshift-operators_9ac16bf5-97d2-478b-a915-9f9919ecd59e_0(277985f1542a11bcb3a94a413d7b908a0b6f4bd3ff9cbb63f8e11ec19f0bdb53): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" podUID="9ac16bf5-97d2-478b-a915-9f9919ecd59e" Feb 19 08:11:24 crc kubenswrapper[5023]: I0219 08:11:24.476719 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:24 crc kubenswrapper[5023]: I0219 08:11:24.477390 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:24 crc kubenswrapper[5023]: E0219 08:11:24.512243 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(c48ca34573f9f5d2ba7e903152ca1552633abeca0cd64688996414b1ac117638): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 08:11:24 crc kubenswrapper[5023]: E0219 08:11:24.512353 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(c48ca34573f9f5d2ba7e903152ca1552633abeca0cd64688996414b1ac117638): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:24 crc kubenswrapper[5023]: E0219 08:11:24.512383 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(c48ca34573f9f5d2ba7e903152ca1552633abeca0cd64688996414b1ac117638): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" Feb 19 08:11:24 crc kubenswrapper[5023]: E0219 08:11:24.512444 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-vg2dl_openshift-operators(49bbb335-22f1-432d-8508-9575cf6006ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-vg2dl_openshift-operators(49bbb335-22f1-432d-8508-9575cf6006ac)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-vg2dl_openshift-operators_49bbb335-22f1-432d-8508-9575cf6006ac_0(c48ca34573f9f5d2ba7e903152ca1552633abeca0cd64688996414b1ac117638): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" podUID="49bbb335-22f1-432d-8508-9575cf6006ac" Feb 19 08:11:25 crc kubenswrapper[5023]: I0219 08:11:25.476995 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:25 crc kubenswrapper[5023]: I0219 08:11:25.478087 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:25 crc kubenswrapper[5023]: E0219 08:11:25.511061 5023 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(f9a2914081150c552a1c5c8ed8e8c30811ac1c0f4e0188340fb4dae01650b4b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 08:11:25 crc kubenswrapper[5023]: E0219 08:11:25.511456 5023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(f9a2914081150c552a1c5c8ed8e8c30811ac1c0f4e0188340fb4dae01650b4b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:25 crc kubenswrapper[5023]: E0219 08:11:25.511478 5023 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(f9a2914081150c552a1c5c8ed8e8c30811ac1c0f4e0188340fb4dae01650b4b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" Feb 19 08:11:25 crc kubenswrapper[5023]: E0219 08:11:25.511532 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators(c5c5f372-8b6a-4454-bc6a-0dcda2907ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators(c5c5f372-8b6a-4454-bc6a-0dcda2907ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_openshift-operators_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1_0(f9a2914081150c552a1c5c8ed8e8c30811ac1c0f4e0188340fb4dae01650b4b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" podUID="c5c5f372-8b6a-4454-bc6a-0dcda2907ec1"
Feb 19 08:11:27 crc kubenswrapper[5023]: I0219 08:11:27.477507 5023 scope.go:117] "RemoveContainer" containerID="53f82719807858d3252130c7af753083dc16b9fef14657edc2ba546952e32400"
Feb 19 08:11:27 crc kubenswrapper[5023]: I0219 08:11:27.981294 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t9v9m_c4610eec-5318-4742-b598-a88feb94cf7d/kube-multus/2.log"
Feb 19 08:11:27 crc kubenswrapper[5023]: I0219 08:11:27.981584 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t9v9m" event={"ID":"c4610eec-5318-4742-b598-a88feb94cf7d","Type":"ContainerStarted","Data":"f8830624989b123818c69a48f737b6561a005da7de956206153b6dddaf12fe59"}
Feb 19 08:11:32 crc kubenswrapper[5023]: I0219 08:11:32.102213 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wpjht"
Feb 19 08:11:34 crc kubenswrapper[5023]: I0219 08:11:34.476882 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84"
Feb 19 08:11:34 crc kubenswrapper[5023]: I0219 08:11:34.477489 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84"
Feb 19 08:11:34 crc kubenswrapper[5023]: I0219 08:11:34.911356 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84"]
Feb 19 08:11:35 crc kubenswrapper[5023]: I0219 08:11:35.028473 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" event={"ID":"9ac16bf5-97d2-478b-a915-9f9919ecd59e","Type":"ContainerStarted","Data":"7e9e9596652a163666cc9fabd6385960ae7b0a16f653dfbfd6283f168dc91d75"}
Feb 19 08:11:35 crc kubenswrapper[5023]: I0219 08:11:35.476113 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk"
Feb 19 08:11:35 crc kubenswrapper[5023]: I0219 08:11:35.476404 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx"
Feb 19 08:11:35 crc kubenswrapper[5023]: I0219 08:11:35.476429 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk"
Feb 19 08:11:35 crc kubenswrapper[5023]: I0219 08:11:35.476614 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jghsx"
Feb 19 08:11:35 crc kubenswrapper[5023]: I0219 08:11:35.664030 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk"]
Feb 19 08:11:35 crc kubenswrapper[5023]: W0219 08:11:35.672578 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b26147b_3c73_4b0d_8810_38d893b67b6b.slice/crio-b502506449c4972de5bddc87a2e8b7e094b7337193447cddb7e968e3bd32546f WatchSource:0}: Error finding container b502506449c4972de5bddc87a2e8b7e094b7337193447cddb7e968e3bd32546f: Status 404 returned error can't find the container with id b502506449c4972de5bddc87a2e8b7e094b7337193447cddb7e968e3bd32546f
Feb 19 08:11:35 crc kubenswrapper[5023]: I0219 08:11:35.723447 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jghsx"]
Feb 19 08:11:35 crc kubenswrapper[5023]: W0219 08:11:35.726691 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabccc29c_4404_4fbf_abec_9046e05e6bc3.slice/crio-78dc224cac6dfd046e2836c660c6d0d69456079dfd9a9891877c55e4d5ea5c4e WatchSource:0}: Error finding container 78dc224cac6dfd046e2836c660c6d0d69456079dfd9a9891877c55e4d5ea5c4e: Status 404 returned error can't find the container with id 78dc224cac6dfd046e2836c660c6d0d69456079dfd9a9891877c55e4d5ea5c4e
Feb 19 08:11:36 crc kubenswrapper[5023]: I0219 08:11:36.036134 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" event={"ID":"4b26147b-3c73-4b0d-8810-38d893b67b6b","Type":"ContainerStarted","Data":"b502506449c4972de5bddc87a2e8b7e094b7337193447cddb7e968e3bd32546f"}
Feb 19 08:11:36 crc kubenswrapper[5023]: I0219 08:11:36.037520 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" event={"ID":"abccc29c-4404-4fbf-abec-9046e05e6bc3","Type":"ContainerStarted","Data":"78dc224cac6dfd046e2836c660c6d0d69456079dfd9a9891877c55e4d5ea5c4e"}
Feb 19 08:11:38 crc kubenswrapper[5023]: I0219 08:11:38.476148 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f"
Feb 19 08:11:38 crc kubenswrapper[5023]: I0219 08:11:38.477401 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f"
Feb 19 08:11:39 crc kubenswrapper[5023]: I0219 08:11:39.476754 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl"
Feb 19 08:11:39 crc kubenswrapper[5023]: I0219 08:11:39.477206 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl"
Feb 19 08:11:40 crc kubenswrapper[5023]: I0219 08:11:40.758601 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-vg2dl"]
Feb 19 08:11:40 crc kubenswrapper[5023]: I0219 08:11:40.805911 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f"]
Feb 19 08:11:40 crc kubenswrapper[5023]: W0219 08:11:40.810731 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c5f372_8b6a_4454_bc6a_0dcda2907ec1.slice/crio-a5a14b9a349384730db7e3e4b45432319ca203c25e7d09448a87d87290e3ca2a WatchSource:0}: Error finding container a5a14b9a349384730db7e3e4b45432319ca203c25e7d09448a87d87290e3ca2a: Status 404 returned error can't find the container with id a5a14b9a349384730db7e3e4b45432319ca203c25e7d09448a87d87290e3ca2a
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.069246 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" event={"ID":"49bbb335-22f1-432d-8508-9575cf6006ac","Type":"ContainerStarted","Data":"39a22d05dcfa06135a413f11fb6a9e78019f7492115ae61066d6b2c01cf41842"}
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.071236 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" event={"ID":"9ac16bf5-97d2-478b-a915-9f9919ecd59e","Type":"ContainerStarted","Data":"cf5c9b6a6e20afe61c309949445885e6a295776ba1d14a132c47005a6b74f4e3"}
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.072875 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" event={"ID":"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1","Type":"ContainerStarted","Data":"a403d191e7d34813640e7bd7894b06ac02fc92661f7095bffcf1ce230272529b"}
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.072913 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" event={"ID":"c5c5f372-8b6a-4454-bc6a-0dcda2907ec1","Type":"ContainerStarted","Data":"a5a14b9a349384730db7e3e4b45432319ca203c25e7d09448a87d87290e3ca2a"}
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.074697 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" event={"ID":"abccc29c-4404-4fbf-abec-9046e05e6bc3","Type":"ContainerStarted","Data":"6521b4293c8782873585e123425cb8d04977e5a015a96fb9d9cc1346c5ec96c2"}
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.074896 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-jghsx"
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.076032 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" event={"ID":"4b26147b-3c73-4b0d-8810-38d893b67b6b","Type":"ContainerStarted","Data":"9b29e8958e8281988a2d573067abcfe12943902c07bd4a1092851b75ed81d1cf"}
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.088827 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qgn84" podStartSLOduration=30.444030029 podStartE2EDuration="36.088810275s" podCreationTimestamp="2026-02-19 08:11:05 +0000 UTC" firstStartedPulling="2026-02-19 08:11:34.912485088 +0000 UTC m=+652.569604036" lastFinishedPulling="2026-02-19 08:11:40.557265294 +0000 UTC m=+658.214384282" observedRunningTime="2026-02-19 08:11:41.087770918 +0000 UTC m=+658.744889866" watchObservedRunningTime="2026-02-19 08:11:41.088810275 +0000 UTC m=+658.745929223"
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.116172 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-jghsx" podStartSLOduration=30.25066441 podStartE2EDuration="35.116151499s" podCreationTimestamp="2026-02-19 08:11:06 +0000 UTC" firstStartedPulling="2026-02-19 08:11:35.729422307 +0000 UTC m=+653.386541255" lastFinishedPulling="2026-02-19 08:11:40.594909396 +0000 UTC m=+658.252028344" observedRunningTime="2026-02-19 08:11:41.112317039 +0000 UTC m=+658.769435997" watchObservedRunningTime="2026-02-19 08:11:41.116151499 +0000 UTC m=+658.773270457"
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.121873 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-jghsx"
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.134038 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f" podStartSLOduration=36.134005885 podStartE2EDuration="36.134005885s" podCreationTimestamp="2026-02-19 08:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:11:41.130849232 +0000 UTC m=+658.787968180" watchObservedRunningTime="2026-02-19 08:11:41.134005885 +0000 UTC m=+658.791124843"
Feb 19 08:11:41 crc kubenswrapper[5023]: I0219 08:11:41.153175 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk" podStartSLOduration=31.278654349 podStartE2EDuration="36.153155104s" podCreationTimestamp="2026-02-19 08:11:05 +0000 UTC" firstStartedPulling="2026-02-19 08:11:35.674510434 +0000 UTC m=+653.331629382" lastFinishedPulling="2026-02-19 08:11:40.549011189 +0000 UTC m=+658.206130137" observedRunningTime="2026-02-19 08:11:41.150950407 +0000 UTC m=+658.808069365" watchObservedRunningTime="2026-02-19 08:11:41.153155104 +0000 UTC m=+658.810274072"
Feb 19 08:11:44 crc kubenswrapper[5023]: I0219 08:11:44.093938 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" event={"ID":"49bbb335-22f1-432d-8508-9575cf6006ac","Type":"ContainerStarted","Data":"06e799210ea4cbe96cc7f78acf0af3280ffe67579f08b4eb191a46da771bd71b"}
Feb 19 08:11:44 crc kubenswrapper[5023]: I0219 08:11:44.094475 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl"
Feb 19 08:11:44 crc kubenswrapper[5023]: I0219 08:11:44.115009 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl" podStartSLOduration=35.731817035 podStartE2EDuration="38.114988526s" podCreationTimestamp="2026-02-19 08:11:06 +0000 UTC" firstStartedPulling="2026-02-19 08:11:40.772584613 +0000 UTC m=+658.429703561" lastFinishedPulling="2026-02-19 08:11:43.155756114 +0000 UTC m=+660.812875052" observedRunningTime="2026-02-19 08:11:44.114111623 +0000 UTC m=+661.771230581" watchObservedRunningTime="2026-02-19 08:11:44.114988526 +0000 UTC m=+661.772107484"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.677472 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"]
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.679039 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.688462 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"]
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.695650 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.754896 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.754980 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvtw\" (UniqueName: \"kubernetes.io/projected/96b16c33-02d5-4371-91f6-e2d137b49df6-kube-api-access-6qvtw\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.755024 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.855808 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qvtw\" (UniqueName: \"kubernetes.io/projected/96b16c33-02d5-4371-91f6-e2d137b49df6-kube-api-access-6qvtw\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.855874 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.855896 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.856296 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.856335 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:47 crc kubenswrapper[5023]: I0219 08:11:47.877759 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qvtw\" (UniqueName: \"kubernetes.io/projected/96b16c33-02d5-4371-91f6-e2d137b49df6-kube-api-access-6qvtw\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:48 crc kubenswrapper[5023]: I0219 08:11:48.006105 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:48 crc kubenswrapper[5023]: I0219 08:11:48.430837 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"]
Feb 19 08:11:48 crc kubenswrapper[5023]: W0219 08:11:48.450860 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b16c33_02d5_4371_91f6_e2d137b49df6.slice/crio-d730b83e7dc1115826f81e5c610d7fc5f0f0546f6ec901b9eb618a548bc71533 WatchSource:0}: Error finding container d730b83e7dc1115826f81e5c610d7fc5f0f0546f6ec901b9eb618a548bc71533: Status 404 returned error can't find the container with id d730b83e7dc1115826f81e5c610d7fc5f0f0546f6ec901b9eb618a548bc71533
Feb 19 08:11:49 crc kubenswrapper[5023]: I0219 08:11:49.121430 5023 generic.go:334] "Generic (PLEG): container finished" podID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerID="1c42764da77a7f261aed8d205f558ad6aa3a96cf77c015c60a5ecb59d871ac61" exitCode=0
Feb 19 08:11:49 crc kubenswrapper[5023]: I0219 08:11:49.121499 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n" event={"ID":"96b16c33-02d5-4371-91f6-e2d137b49df6","Type":"ContainerDied","Data":"1c42764da77a7f261aed8d205f558ad6aa3a96cf77c015c60a5ecb59d871ac61"}
Feb 19 08:11:49 crc kubenswrapper[5023]: I0219 08:11:49.121526 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n" event={"ID":"96b16c33-02d5-4371-91f6-e2d137b49df6","Type":"ContainerStarted","Data":"d730b83e7dc1115826f81e5c610d7fc5f0f0546f6ec901b9eb618a548bc71533"}
Feb 19 08:11:51 crc kubenswrapper[5023]: I0219 08:11:51.137359 5023 generic.go:334] "Generic (PLEG): container finished" podID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerID="288ac7e32be5499f409da1ea690e4635de63dc5637f5ecf1bf38b07b5854c2ab" exitCode=0
Feb 19 08:11:51 crc kubenswrapper[5023]: I0219 08:11:51.137549 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n" event={"ID":"96b16c33-02d5-4371-91f6-e2d137b49df6","Type":"ContainerDied","Data":"288ac7e32be5499f409da1ea690e4635de63dc5637f5ecf1bf38b07b5854c2ab"}
Feb 19 08:11:52 crc kubenswrapper[5023]: I0219 08:11:52.144710 5023 generic.go:334] "Generic (PLEG): container finished" podID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerID="d3ce0e347db0e817adf34d021e481b36938efdc96499cc06d72ad3a9b30685c4" exitCode=0
Feb 19 08:11:52 crc kubenswrapper[5023]: I0219 08:11:52.144828 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n" event={"ID":"96b16c33-02d5-4371-91f6-e2d137b49df6","Type":"ContainerDied","Data":"d3ce0e347db0e817adf34d021e481b36938efdc96499cc06d72ad3a9b30685c4"}
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.349276 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.533695 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qvtw\" (UniqueName: \"kubernetes.io/projected/96b16c33-02d5-4371-91f6-e2d137b49df6-kube-api-access-6qvtw\") pod \"96b16c33-02d5-4371-91f6-e2d137b49df6\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") "
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.533763 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-bundle\") pod \"96b16c33-02d5-4371-91f6-e2d137b49df6\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") "
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.533794 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-util\") pod \"96b16c33-02d5-4371-91f6-e2d137b49df6\" (UID: \"96b16c33-02d5-4371-91f6-e2d137b49df6\") "
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.534693 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-bundle" (OuterVolumeSpecName: "bundle") pod "96b16c33-02d5-4371-91f6-e2d137b49df6" (UID: "96b16c33-02d5-4371-91f6-e2d137b49df6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.539899 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b16c33-02d5-4371-91f6-e2d137b49df6-kube-api-access-6qvtw" (OuterVolumeSpecName: "kube-api-access-6qvtw") pod "96b16c33-02d5-4371-91f6-e2d137b49df6" (UID: "96b16c33-02d5-4371-91f6-e2d137b49df6"). InnerVolumeSpecName "kube-api-access-6qvtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.547852 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-util" (OuterVolumeSpecName: "util") pod "96b16c33-02d5-4371-91f6-e2d137b49df6" (UID: "96b16c33-02d5-4371-91f6-e2d137b49df6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.635158 5023 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-util\") on node \"crc\" DevicePath \"\""
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.635195 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qvtw\" (UniqueName: \"kubernetes.io/projected/96b16c33-02d5-4371-91f6-e2d137b49df6-kube-api-access-6qvtw\") on node \"crc\" DevicePath \"\""
Feb 19 08:11:53 crc kubenswrapper[5023]: I0219 08:11:53.635208 5023 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96b16c33-02d5-4371-91f6-e2d137b49df6-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:11:54 crc kubenswrapper[5023]: I0219 08:11:54.156747 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n" event={"ID":"96b16c33-02d5-4371-91f6-e2d137b49df6","Type":"ContainerDied","Data":"d730b83e7dc1115826f81e5c610d7fc5f0f0546f6ec901b9eb618a548bc71533"}
Feb 19 08:11:54 crc kubenswrapper[5023]: I0219 08:11:54.156789 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d730b83e7dc1115826f81e5c610d7fc5f0f0546f6ec901b9eb618a548bc71533"
Feb 19 08:11:54 crc kubenswrapper[5023]: I0219 08:11:54.156793 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n"
Feb 19 08:11:56 crc kubenswrapper[5023]: I0219 08:11:56.551352 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-vg2dl"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.770547 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"]
Feb 19 08:11:59 crc kubenswrapper[5023]: E0219 08:11:59.771883 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerName="util"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.772107 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerName="util"
Feb 19 08:11:59 crc kubenswrapper[5023]: E0219 08:11:59.772120 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerName="pull"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.772126 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerName="pull"
Feb 19 08:11:59 crc kubenswrapper[5023]: E0219 08:11:59.772142 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerName="extract"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.772147 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerName="extract"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.772275 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b16c33-02d5-4371-91f6-e2d137b49df6" containerName="extract"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.772745 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.774788 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-grnlf"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.774930 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.775939 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.789991 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"]
Feb 19 08:11:59 crc kubenswrapper[5023]: I0219 08:11:59.924981 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj45w\" (UniqueName: \"kubernetes.io/projected/6180e8c4-c97c-411e-b3a1-2bac8b0afed2-kube-api-access-pj45w\") pod \"nmstate-operator-694c9596b7-cgwg2\" (UID: \"6180e8c4-c97c-411e-b3a1-2bac8b0afed2\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"
Feb 19 08:12:00 crc kubenswrapper[5023]: I0219 08:12:00.027024 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj45w\" (UniqueName: \"kubernetes.io/projected/6180e8c4-c97c-411e-b3a1-2bac8b0afed2-kube-api-access-pj45w\") pod \"nmstate-operator-694c9596b7-cgwg2\" (UID: \"6180e8c4-c97c-411e-b3a1-2bac8b0afed2\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"
Feb 19 08:12:00 crc kubenswrapper[5023]: I0219 08:12:00.046937 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj45w\" (UniqueName: \"kubernetes.io/projected/6180e8c4-c97c-411e-b3a1-2bac8b0afed2-kube-api-access-pj45w\") pod \"nmstate-operator-694c9596b7-cgwg2\" (UID: \"6180e8c4-c97c-411e-b3a1-2bac8b0afed2\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"
Feb 19 08:12:00 crc kubenswrapper[5023]: I0219 08:12:00.088662 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"
Feb 19 08:12:00 crc kubenswrapper[5023]: I0219 08:12:00.702560 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-cgwg2"]
Feb 19 08:12:00 crc kubenswrapper[5023]: W0219 08:12:00.709862 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6180e8c4_c97c_411e_b3a1_2bac8b0afed2.slice/crio-1c75a1e7bc04589151f6d13f1f525d8fb667a2f7e0d038f82dfcd6512ebef3f6 WatchSource:0}: Error finding container 1c75a1e7bc04589151f6d13f1f525d8fb667a2f7e0d038f82dfcd6512ebef3f6: Status 404 returned error can't find the container with id 1c75a1e7bc04589151f6d13f1f525d8fb667a2f7e0d038f82dfcd6512ebef3f6
Feb 19 08:12:01 crc kubenswrapper[5023]: I0219 08:12:01.195874 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2" event={"ID":"6180e8c4-c97c-411e-b3a1-2bac8b0afed2","Type":"ContainerStarted","Data":"1c75a1e7bc04589151f6d13f1f525d8fb667a2f7e0d038f82dfcd6512ebef3f6"}
Feb 19 08:12:04 crc kubenswrapper[5023]: I0219 08:12:04.215963 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2" event={"ID":"6180e8c4-c97c-411e-b3a1-2bac8b0afed2","Type":"ContainerStarted","Data":"4ab979ae356df136260319981932020cd36232d9bfc9658ad2fc4381ce95a843"}
Feb 19 08:12:04 crc kubenswrapper[5023]: I0219 08:12:04.229830 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-cgwg2" podStartSLOduration=2.678074739 podStartE2EDuration="5.229812888s" podCreationTimestamp="2026-02-19 08:11:59 +0000 UTC" firstStartedPulling="2026-02-19 08:12:00.712491381 +0000 UTC m=+678.369610329" lastFinishedPulling="2026-02-19 08:12:03.26422953 +0000 UTC m=+680.921348478" observedRunningTime="2026-02-19 08:12:04.228689769 +0000 UTC m=+681.885808717" watchObservedRunningTime="2026-02-19 08:12:04.229812888 +0000 UTC m=+681.886931836"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.658612 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh"]
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.660082 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.662242 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fkm79"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.674165 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh"]
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.693184 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"]
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.698342 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.705376 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.722338 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-f9rlh"]
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.723226 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-f9rlh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.733952 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"]
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.819781 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n"]
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.820694 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.822686 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.823761 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.831434 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n"]
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.832657 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-sp997"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.838687 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-nmstate-lock\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.838774 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-dbus-socket\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.838849 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qbn\" (UniqueName: \"kubernetes.io/projected/d9ec14c0-957a-473e-9c95-aa0ced5b523c-kube-api-access-b7qbn\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.838878 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-ovs-socket\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.839033 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfw4\" (UniqueName: \"kubernetes.io/projected/78d642b7-0914-4e8b-840b-7fc5454ddab6-kube-api-access-qrfw4\") pod \"nmstate-webhook-866bcb46dc-9hdkv\" (UID: \"78d642b7-0914-4e8b-840b-7fc5454ddab6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.839071 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcnm9\" (UniqueName: \"kubernetes.io/projected/abb296fe-0769-478a-ac52-38a1610a8ca8-kube-api-access-xcnm9\") pod \"nmstate-metrics-58c85c668d-4m9fh\" (UID: \"abb296fe-0769-478a-ac52-38a1610a8ca8\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.839102 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/78d642b7-0914-4e8b-840b-7fc5454ddab6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9hdkv\" (UID: \"78d642b7-0914-4e8b-840b-7fc5454ddab6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940175 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfw4\" (UniqueName: \"kubernetes.io/projected/78d642b7-0914-4e8b-840b-7fc5454ddab6-kube-api-access-qrfw4\") pod \"nmstate-webhook-866bcb46dc-9hdkv\" (UID: \"78d642b7-0914-4e8b-840b-7fc5454ddab6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940221 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcnm9\" (UniqueName: \"kubernetes.io/projected/abb296fe-0769-478a-ac52-38a1610a8ca8-kube-api-access-xcnm9\") pod \"nmstate-metrics-58c85c668d-4m9fh\" (UID: \"abb296fe-0769-478a-ac52-38a1610a8ca8\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940247 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/78d642b7-0914-4e8b-840b-7fc5454ddab6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9hdkv\" (UID: \"78d642b7-0914-4e8b-840b-7fc5454ddab6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940268 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-nmstate-lock\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940298 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n"
Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940347 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-dbus-socket\")
pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940369 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5w4\" (UniqueName: \"kubernetes.io/projected/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-kube-api-access-wr5w4\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940400 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qbn\" (UniqueName: \"kubernetes.io/projected/d9ec14c0-957a-473e-9c95-aa0ced5b523c-kube-api-access-b7qbn\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940488 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-ovs-socket\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940512 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940505 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-nmstate-lock\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940581 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-ovs-socket\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.940789 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d9ec14c0-957a-473e-9c95-aa0ced5b523c-dbus-socket\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.947505 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/78d642b7-0914-4e8b-840b-7fc5454ddab6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9hdkv\" (UID: \"78d642b7-0914-4e8b-840b-7fc5454ddab6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.967534 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcnm9\" (UniqueName: \"kubernetes.io/projected/abb296fe-0769-478a-ac52-38a1610a8ca8-kube-api-access-xcnm9\") pod \"nmstate-metrics-58c85c668d-4m9fh\" (UID: \"abb296fe-0769-478a-ac52-38a1610a8ca8\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.972306 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qbn\" (UniqueName: 
\"kubernetes.io/projected/d9ec14c0-957a-473e-9c95-aa0ced5b523c-kube-api-access-b7qbn\") pod \"nmstate-handler-f9rlh\" (UID: \"d9ec14c0-957a-473e-9c95-aa0ced5b523c\") " pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.976272 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfw4\" (UniqueName: \"kubernetes.io/projected/78d642b7-0914-4e8b-840b-7fc5454ddab6-kube-api-access-qrfw4\") pod \"nmstate-webhook-866bcb46dc-9hdkv\" (UID: \"78d642b7-0914-4e8b-840b-7fc5454ddab6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" Feb 19 08:12:08 crc kubenswrapper[5023]: I0219 08:12:08.981223 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.014266 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.029381 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-86c9d74687-pstmq"] Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.035044 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.043131 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.043228 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5w4\" (UniqueName: \"kubernetes.io/projected/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-kube-api-access-wr5w4\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.043276 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.044331 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.047293 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.049297 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.053005 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86c9d74687-pstmq"] Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.068377 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5w4\" (UniqueName: \"kubernetes.io/projected/cc7ad06c-3614-4f0d-88ad-1d743499fc9c-kube-api-access-wr5w4\") pod \"nmstate-console-plugin-5c78fc5d65-8jh2n\" (UID: \"cc7ad06c-3614-4f0d-88ad-1d743499fc9c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.134758 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.144092 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-serving-cert\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.144147 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5trg\" (UniqueName: \"kubernetes.io/projected/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-kube-api-access-x5trg\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.144203 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-trusted-ca-bundle\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.144226 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-oauth-serving-cert\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.144303 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-config\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.144332 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-oauth-config\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.144361 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-service-ca\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.248312 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-serving-cert\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.248357 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5trg\" (UniqueName: \"kubernetes.io/projected/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-kube-api-access-x5trg\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.248390 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-trusted-ca-bundle\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.248418 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-oauth-serving-cert\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.248486 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-config\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.248512 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-oauth-config\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.248543 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-service-ca\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.249701 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-service-ca\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.255852 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-serving-cert\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.258558 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-trusted-ca-bundle\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.262881 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-config\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.266443 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-oauth-serving-cert\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.267037 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-oauth-config\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.270419 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-f9rlh" event={"ID":"d9ec14c0-957a-473e-9c95-aa0ced5b523c","Type":"ContainerStarted","Data":"cae8cc557b6f21d24c60cb350d16eb6417e8cabb9d2e2d1357bddece3ccf45ac"} Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.282657 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5trg\" (UniqueName: \"kubernetes.io/projected/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-kube-api-access-x5trg\") pod \"console-86c9d74687-pstmq\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") " pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.304468 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh"] Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.347859 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv"] Feb 19 08:12:09 crc kubenswrapper[5023]: W0219 08:12:09.353840 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78d642b7_0914_4e8b_840b_7fc5454ddab6.slice/crio-921eb793ea19e4b5094e04baa0ab419bd4377bb687c1aca9dab18939d3db6592 WatchSource:0}: Error finding container 921eb793ea19e4b5094e04baa0ab419bd4377bb687c1aca9dab18939d3db6592: Status 404 returned error can't find the container with id 921eb793ea19e4b5094e04baa0ab419bd4377bb687c1aca9dab18939d3db6592 Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.364108 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.418278 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n"] Feb 19 08:12:09 crc kubenswrapper[5023]: W0219 08:12:09.424446 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc7ad06c_3614_4f0d_88ad_1d743499fc9c.slice/crio-0cc8355fc6437515b79b707a3c9d114c4b56f61761f33b58b27e69f275d1ee83 WatchSource:0}: Error finding container 0cc8355fc6437515b79b707a3c9d114c4b56f61761f33b58b27e69f275d1ee83: Status 404 returned error can't find the container with id 0cc8355fc6437515b79b707a3c9d114c4b56f61761f33b58b27e69f275d1ee83 Feb 19 08:12:09 crc kubenswrapper[5023]: I0219 08:12:09.578062 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86c9d74687-pstmq"] Feb 19 08:12:10 crc kubenswrapper[5023]: I0219 08:12:10.278213 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" event={"ID":"78d642b7-0914-4e8b-840b-7fc5454ddab6","Type":"ContainerStarted","Data":"921eb793ea19e4b5094e04baa0ab419bd4377bb687c1aca9dab18939d3db6592"} Feb 19 08:12:10 crc kubenswrapper[5023]: I0219 08:12:10.279472 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86c9d74687-pstmq" event={"ID":"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e","Type":"ContainerStarted","Data":"f9c5afa5644ea6716024114d2753b4be08c0c2e2874e3a8d7cc924d3d1dd316b"} Feb 19 08:12:10 crc kubenswrapper[5023]: I0219 08:12:10.279534 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86c9d74687-pstmq" event={"ID":"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e","Type":"ContainerStarted","Data":"48de7cd2e880cf4a008a95030124bae0e688abac351193b88ae317a7adf718e5"} Feb 19 08:12:10 crc kubenswrapper[5023]: I0219 08:12:10.280610 5023 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh" event={"ID":"abb296fe-0769-478a-ac52-38a1610a8ca8","Type":"ContainerStarted","Data":"be2bfd8235334b6e8c2adb83666389a430286bff96e0c29e8b9e0fa681bf5208"} Feb 19 08:12:10 crc kubenswrapper[5023]: I0219 08:12:10.281507 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" event={"ID":"cc7ad06c-3614-4f0d-88ad-1d743499fc9c","Type":"ContainerStarted","Data":"0cc8355fc6437515b79b707a3c9d114c4b56f61761f33b58b27e69f275d1ee83"} Feb 19 08:12:10 crc kubenswrapper[5023]: I0219 08:12:10.300194 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86c9d74687-pstmq" podStartSLOduration=1.30017594 podStartE2EDuration="1.30017594s" podCreationTimestamp="2026-02-19 08:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:12:10.296153965 +0000 UTC m=+687.953272913" watchObservedRunningTime="2026-02-19 08:12:10.30017594 +0000 UTC m=+687.957294888" Feb 19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.315124 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" event={"ID":"78d642b7-0914-4e8b-840b-7fc5454ddab6","Type":"ContainerStarted","Data":"eac51a1607f5b85baed41e7d07f80257e19c8c547fa51d511f4a6aad07106179"} Feb 19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.315756 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" Feb 19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.319336 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-f9rlh" event={"ID":"d9ec14c0-957a-473e-9c95-aa0ced5b523c","Type":"ContainerStarted","Data":"5be8d26c9d9799a78a03f5965caa260a4123c85d7f593e459f24680db8424994"} Feb 
19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.319531 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.320796 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh" event={"ID":"abb296fe-0769-478a-ac52-38a1610a8ca8","Type":"ContainerStarted","Data":"efb3187e4bedbf469a29a3424575e1ec19cf6f8a4f9ae8a32e01dce72403dc7d"} Feb 19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.321967 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" event={"ID":"cc7ad06c-3614-4f0d-88ad-1d743499fc9c","Type":"ContainerStarted","Data":"5cfd1149288a720bd2577789390fd12c63dfefe6ef354ab8b88f4b06d2401356"} Feb 19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.338848 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" podStartSLOduration=2.246207292 podStartE2EDuration="7.338822898s" podCreationTimestamp="2026-02-19 08:12:08 +0000 UTC" firstStartedPulling="2026-02-19 08:12:09.359219855 +0000 UTC m=+687.016338803" lastFinishedPulling="2026-02-19 08:12:14.451835441 +0000 UTC m=+692.108954409" observedRunningTime="2026-02-19 08:12:15.334199017 +0000 UTC m=+692.991317965" watchObservedRunningTime="2026-02-19 08:12:15.338822898 +0000 UTC m=+692.995941846" Feb 19 08:12:15 crc kubenswrapper[5023]: I0219 08:12:15.399472 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8jh2n" podStartSLOduration=2.375125707 podStartE2EDuration="7.39945475s" podCreationTimestamp="2026-02-19 08:12:08 +0000 UTC" firstStartedPulling="2026-02-19 08:12:09.427029195 +0000 UTC m=+687.084148143" lastFinishedPulling="2026-02-19 08:12:14.451358228 +0000 UTC m=+692.108477186" observedRunningTime="2026-02-19 08:12:15.396545734 
+0000 UTC m=+693.053664682" watchObservedRunningTime="2026-02-19 08:12:15.39945475 +0000 UTC m=+693.056573688" Feb 19 08:12:17 crc kubenswrapper[5023]: I0219 08:12:17.336792 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh" event={"ID":"abb296fe-0769-478a-ac52-38a1610a8ca8","Type":"ContainerStarted","Data":"63a69fffc8d87ab90ba100a1a42adaae27fb3a5c1baa1d14d2ea567a4ac00ecb"} Feb 19 08:12:17 crc kubenswrapper[5023]: I0219 08:12:17.358146 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-f9rlh" podStartSLOduration=4.100345908 podStartE2EDuration="9.358120013s" podCreationTimestamp="2026-02-19 08:12:08 +0000 UTC" firstStartedPulling="2026-02-19 08:12:09.096251883 +0000 UTC m=+686.753370831" lastFinishedPulling="2026-02-19 08:12:14.354025978 +0000 UTC m=+692.011144936" observedRunningTime="2026-02-19 08:12:15.41286365 +0000 UTC m=+693.069982608" watchObservedRunningTime="2026-02-19 08:12:17.358120013 +0000 UTC m=+695.015239001" Feb 19 08:12:17 crc kubenswrapper[5023]: I0219 08:12:17.364679 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4m9fh" podStartSLOduration=1.608871128 podStartE2EDuration="9.364656493s" podCreationTimestamp="2026-02-19 08:12:08 +0000 UTC" firstStartedPulling="2026-02-19 08:12:09.313244465 +0000 UTC m=+686.970363413" lastFinishedPulling="2026-02-19 08:12:17.06902983 +0000 UTC m=+694.726148778" observedRunningTime="2026-02-19 08:12:17.359389126 +0000 UTC m=+695.016508074" watchObservedRunningTime="2026-02-19 08:12:17.364656493 +0000 UTC m=+695.021775441" Feb 19 08:12:19 crc kubenswrapper[5023]: I0219 08:12:19.086060 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-f9rlh" Feb 19 08:12:19 crc kubenswrapper[5023]: I0219 08:12:19.365664 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:19 crc kubenswrapper[5023]: I0219 08:12:19.365965 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:19 crc kubenswrapper[5023]: I0219 08:12:19.374341 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:20 crc kubenswrapper[5023]: I0219 08:12:20.366158 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-86c9d74687-pstmq" Feb 19 08:12:20 crc kubenswrapper[5023]: I0219 08:12:20.437053 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t88r2"] Feb 19 08:12:29 crc kubenswrapper[5023]: I0219 08:12:29.025725 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9hdkv" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.618797 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh"] Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.620512 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.622288 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.629079 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh"] Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.781836 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.781921 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7mn\" (UniqueName: \"kubernetes.io/projected/5e684cb3-b258-4828-9438-41f79a2a9bf7-kube-api-access-jg7mn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.781985 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: 
I0219 08:12:40.882785 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.883178 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.883325 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg7mn\" (UniqueName: \"kubernetes.io/projected/5e684cb3-b258-4828-9438-41f79a2a9bf7-kube-api-access-jg7mn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.883254 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.883732 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.916606 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg7mn\" (UniqueName: \"kubernetes.io/projected/5e684cb3-b258-4828-9438-41f79a2a9bf7-kube-api-access-jg7mn\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:40 crc kubenswrapper[5023]: I0219 08:12:40.970909 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:41 crc kubenswrapper[5023]: I0219 08:12:41.227083 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh"] Feb 19 08:12:41 crc kubenswrapper[5023]: I0219 08:12:41.505274 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" event={"ID":"5e684cb3-b258-4828-9438-41f79a2a9bf7","Type":"ContainerStarted","Data":"475e995c137f1a0853bf73026d35fc89af6504c739b18f2f27e3dfe52251276d"} Feb 19 08:12:41 crc kubenswrapper[5023]: I0219 08:12:41.505575 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" event={"ID":"5e684cb3-b258-4828-9438-41f79a2a9bf7","Type":"ContainerStarted","Data":"75e915d2e6e362d236995f00a7db36fff8df24d92b2d75a86a429abca39d8f14"} Feb 19 08:12:42 crc kubenswrapper[5023]: I0219 08:12:42.511590 5023 
generic.go:334] "Generic (PLEG): container finished" podID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerID="475e995c137f1a0853bf73026d35fc89af6504c739b18f2f27e3dfe52251276d" exitCode=0 Feb 19 08:12:42 crc kubenswrapper[5023]: I0219 08:12:42.511708 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" event={"ID":"5e684cb3-b258-4828-9438-41f79a2a9bf7","Type":"ContainerDied","Data":"475e995c137f1a0853bf73026d35fc89af6504c739b18f2f27e3dfe52251276d"} Feb 19 08:12:44 crc kubenswrapper[5023]: I0219 08:12:44.552713 5023 generic.go:334] "Generic (PLEG): container finished" podID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerID="e3df17806aae516e2b528d14d9ddb5c8c3b046ae326ec29e2d3491f063d78119" exitCode=0 Feb 19 08:12:44 crc kubenswrapper[5023]: I0219 08:12:44.553457 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" event={"ID":"5e684cb3-b258-4828-9438-41f79a2a9bf7","Type":"ContainerDied","Data":"e3df17806aae516e2b528d14d9ddb5c8c3b046ae326ec29e2d3491f063d78119"} Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.489293 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-t88r2" podUID="473d61a9-cdf6-4f1b-9727-ec1f00482f00" containerName="console" containerID="cri-o://1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb" gracePeriod=15 Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.561609 5023 generic.go:334] "Generic (PLEG): container finished" podID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerID="386b23194eddddb258bf42367da12db00d3f5cbab4ad4d9210067f5b77e7fc1b" exitCode=0 Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.561667 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" 
event={"ID":"5e684cb3-b258-4828-9438-41f79a2a9bf7","Type":"ContainerDied","Data":"386b23194eddddb258bf42367da12db00d3f5cbab4ad4d9210067f5b77e7fc1b"} Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.838169 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t88r2_473d61a9-cdf6-4f1b-9727-ec1f00482f00/console/0.log" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.838589 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.965033 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-config\") pod \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.965113 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-service-ca\") pod \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.965184 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-oauth-config\") pod \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.965220 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-serving-cert\") pod \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " Feb 19 
08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.965284 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-oauth-serving-cert\") pod \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.965318 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-trusted-ca-bundle\") pod \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.965347 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn7r5\" (UniqueName: \"kubernetes.io/projected/473d61a9-cdf6-4f1b-9727-ec1f00482f00-kube-api-access-rn7r5\") pod \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\" (UID: \"473d61a9-cdf6-4f1b-9727-ec1f00482f00\") " Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.966043 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "473d61a9-cdf6-4f1b-9727-ec1f00482f00" (UID: "473d61a9-cdf6-4f1b-9727-ec1f00482f00"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.966067 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-config" (OuterVolumeSpecName: "console-config") pod "473d61a9-cdf6-4f1b-9727-ec1f00482f00" (UID: "473d61a9-cdf6-4f1b-9727-ec1f00482f00"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.966055 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "473d61a9-cdf6-4f1b-9727-ec1f00482f00" (UID: "473d61a9-cdf6-4f1b-9727-ec1f00482f00"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.966136 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-service-ca" (OuterVolumeSpecName: "service-ca") pod "473d61a9-cdf6-4f1b-9727-ec1f00482f00" (UID: "473d61a9-cdf6-4f1b-9727-ec1f00482f00"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.970787 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "473d61a9-cdf6-4f1b-9727-ec1f00482f00" (UID: "473d61a9-cdf6-4f1b-9727-ec1f00482f00"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.970820 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473d61a9-cdf6-4f1b-9727-ec1f00482f00-kube-api-access-rn7r5" (OuterVolumeSpecName: "kube-api-access-rn7r5") pod "473d61a9-cdf6-4f1b-9727-ec1f00482f00" (UID: "473d61a9-cdf6-4f1b-9727-ec1f00482f00"). InnerVolumeSpecName "kube-api-access-rn7r5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:12:45 crc kubenswrapper[5023]: I0219 08:12:45.971737 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "473d61a9-cdf6-4f1b-9727-ec1f00482f00" (UID: "473d61a9-cdf6-4f1b-9727-ec1f00482f00"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.067069 5023 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.067104 5023 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.067117 5023 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.067128 5023 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/473d61a9-cdf6-4f1b-9727-ec1f00482f00-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.067138 5023 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.067146 5023 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/473d61a9-cdf6-4f1b-9727-ec1f00482f00-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.067153 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn7r5\" (UniqueName: \"kubernetes.io/projected/473d61a9-cdf6-4f1b-9727-ec1f00482f00-kube-api-access-rn7r5\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.569521 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t88r2_473d61a9-cdf6-4f1b-9727-ec1f00482f00/console/0.log" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.569573 5023 generic.go:334] "Generic (PLEG): container finished" podID="473d61a9-cdf6-4f1b-9727-ec1f00482f00" containerID="1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb" exitCode=2 Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.569653 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t88r2" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.569663 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t88r2" event={"ID":"473d61a9-cdf6-4f1b-9727-ec1f00482f00","Type":"ContainerDied","Data":"1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb"} Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.569747 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t88r2" event={"ID":"473d61a9-cdf6-4f1b-9727-ec1f00482f00","Type":"ContainerDied","Data":"093a7d75fb9fd2a004ae94929b142936d9146d7919cbace7b318fa87e304de72"} Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.569768 5023 scope.go:117] "RemoveContainer" containerID="1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.599216 5023 scope.go:117] "RemoveContainer" containerID="1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb" Feb 19 08:12:46 crc kubenswrapper[5023]: E0219 08:12:46.600612 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb\": container with ID starting with 1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb not found: ID does not exist" containerID="1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.600802 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb"} err="failed to get container status \"1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb\": rpc error: code = NotFound desc = could not find container \"1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb\": 
container with ID starting with 1c01335b44e33c7c26971335626b46880a97555594488b86dec579f033f95bbb not found: ID does not exist" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.616448 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t88r2"] Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.629009 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-t88r2"] Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.879600 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.980934 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-bundle\") pod \"5e684cb3-b258-4828-9438-41f79a2a9bf7\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.980972 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-util\") pod \"5e684cb3-b258-4828-9438-41f79a2a9bf7\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.981028 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg7mn\" (UniqueName: \"kubernetes.io/projected/5e684cb3-b258-4828-9438-41f79a2a9bf7-kube-api-access-jg7mn\") pod \"5e684cb3-b258-4828-9438-41f79a2a9bf7\" (UID: \"5e684cb3-b258-4828-9438-41f79a2a9bf7\") " Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.981860 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-bundle" (OuterVolumeSpecName: "bundle") pod 
"5e684cb3-b258-4828-9438-41f79a2a9bf7" (UID: "5e684cb3-b258-4828-9438-41f79a2a9bf7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.985270 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e684cb3-b258-4828-9438-41f79a2a9bf7-kube-api-access-jg7mn" (OuterVolumeSpecName: "kube-api-access-jg7mn") pod "5e684cb3-b258-4828-9438-41f79a2a9bf7" (UID: "5e684cb3-b258-4828-9438-41f79a2a9bf7"). InnerVolumeSpecName "kube-api-access-jg7mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:12:46 crc kubenswrapper[5023]: I0219 08:12:46.990525 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-util" (OuterVolumeSpecName: "util") pod "5e684cb3-b258-4828-9438-41f79a2a9bf7" (UID: "5e684cb3-b258-4828-9438-41f79a2a9bf7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:12:47 crc kubenswrapper[5023]: I0219 08:12:47.082334 5023 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:47 crc kubenswrapper[5023]: I0219 08:12:47.082365 5023 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e684cb3-b258-4828-9438-41f79a2a9bf7-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:47 crc kubenswrapper[5023]: I0219 08:12:47.082377 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg7mn\" (UniqueName: \"kubernetes.io/projected/5e684cb3-b258-4828-9438-41f79a2a9bf7-kube-api-access-jg7mn\") on node \"crc\" DevicePath \"\"" Feb 19 08:12:47 crc kubenswrapper[5023]: I0219 08:12:47.489587 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="473d61a9-cdf6-4f1b-9727-ec1f00482f00" path="/var/lib/kubelet/pods/473d61a9-cdf6-4f1b-9727-ec1f00482f00/volumes" Feb 19 08:12:47 crc kubenswrapper[5023]: I0219 08:12:47.587118 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" event={"ID":"5e684cb3-b258-4828-9438-41f79a2a9bf7","Type":"ContainerDied","Data":"75e915d2e6e362d236995f00a7db36fff8df24d92b2d75a86a429abca39d8f14"} Feb 19 08:12:47 crc kubenswrapper[5023]: I0219 08:12:47.587199 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e915d2e6e362d236995f00a7db36fff8df24d92b2d75a86a429abca39d8f14" Feb 19 08:12:47 crc kubenswrapper[5023]: I0219 08:12:47.587244 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.149219 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm"] Feb 19 08:12:57 crc kubenswrapper[5023]: E0219 08:12:57.149892 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerName="util" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.149904 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerName="util" Feb 19 08:12:57 crc kubenswrapper[5023]: E0219 08:12:57.149918 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473d61a9-cdf6-4f1b-9727-ec1f00482f00" containerName="console" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.149924 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="473d61a9-cdf6-4f1b-9727-ec1f00482f00" containerName="console" Feb 19 08:12:57 crc kubenswrapper[5023]: E0219 08:12:57.149932 5023 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerName="pull" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.149937 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerName="pull" Feb 19 08:12:57 crc kubenswrapper[5023]: E0219 08:12:57.149951 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerName="extract" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.149956 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerName="extract" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.150058 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e684cb3-b258-4828-9438-41f79a2a9bf7" containerName="extract" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.150067 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="473d61a9-cdf6-4f1b-9727-ec1f00482f00" containerName="console" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.150524 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.153361 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.153563 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.153641 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dtrls" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.154303 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.154542 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.164288 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm"] Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.308282 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6afd6128-1c17-4490-8b98-52b684318f65-webhook-cert\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.308341 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xckl6\" (UniqueName: \"kubernetes.io/projected/6afd6128-1c17-4490-8b98-52b684318f65-kube-api-access-xckl6\") pod 
\"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.308368 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6afd6128-1c17-4490-8b98-52b684318f65-apiservice-cert\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.361784 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk"] Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.362699 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.366008 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.366255 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.366340 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nx59z" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.374925 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk"] Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.410756 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6afd6128-1c17-4490-8b98-52b684318f65-apiservice-cert\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.410863 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6afd6128-1c17-4490-8b98-52b684318f65-webhook-cert\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.410909 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xckl6\" (UniqueName: \"kubernetes.io/projected/6afd6128-1c17-4490-8b98-52b684318f65-kube-api-access-xckl6\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.417691 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6afd6128-1c17-4490-8b98-52b684318f65-apiservice-cert\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.421693 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6afd6128-1c17-4490-8b98-52b684318f65-webhook-cert\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " 
pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.437315 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xckl6\" (UniqueName: \"kubernetes.io/projected/6afd6128-1c17-4490-8b98-52b684318f65-kube-api-access-xckl6\") pod \"metallb-operator-controller-manager-744474f4f9-cg2wm\" (UID: \"6afd6128-1c17-4490-8b98-52b684318f65\") " pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.468142 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.512452 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf48f\" (UniqueName: \"kubernetes.io/projected/acdff3eb-f5d2-48f5-bef3-08606374dc4d-kube-api-access-gf48f\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.512753 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/acdff3eb-f5d2-48f5-bef3-08606374dc4d-webhook-cert\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.512866 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/acdff3eb-f5d2-48f5-bef3-08606374dc4d-apiservice-cert\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: 
\"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.614116 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf48f\" (UniqueName: \"kubernetes.io/projected/acdff3eb-f5d2-48f5-bef3-08606374dc4d-kube-api-access-gf48f\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.614608 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/acdff3eb-f5d2-48f5-bef3-08606374dc4d-webhook-cert\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.614698 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/acdff3eb-f5d2-48f5-bef3-08606374dc4d-apiservice-cert\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.620299 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/acdff3eb-f5d2-48f5-bef3-08606374dc4d-apiservice-cert\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.627338 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/acdff3eb-f5d2-48f5-bef3-08606374dc4d-webhook-cert\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.633239 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf48f\" (UniqueName: \"kubernetes.io/projected/acdff3eb-f5d2-48f5-bef3-08606374dc4d-kube-api-access-gf48f\") pod \"metallb-operator-webhook-server-674976f6cc-f4mpk\" (UID: \"acdff3eb-f5d2-48f5-bef3-08606374dc4d\") " pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.682482 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.737983 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm"] Feb 19 08:12:57 crc kubenswrapper[5023]: W0219 08:12:57.748770 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6afd6128_1c17_4490_8b98_52b684318f65.slice/crio-c990289b95794903f58ead4c68f20d2d38cb88212646551cc0aa15e4096d91d3 WatchSource:0}: Error finding container c990289b95794903f58ead4c68f20d2d38cb88212646551cc0aa15e4096d91d3: Status 404 returned error can't find the container with id c990289b95794903f58ead4c68f20d2d38cb88212646551cc0aa15e4096d91d3 Feb 19 08:12:57 crc kubenswrapper[5023]: I0219 08:12:57.922435 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk"] Feb 19 08:12:57 crc kubenswrapper[5023]: W0219 08:12:57.933669 5023 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacdff3eb_f5d2_48f5_bef3_08606374dc4d.slice/crio-4233ce8fc6cf55235846aea5cf172947f2a1ec0628650523db4d52a66482a485 WatchSource:0}: Error finding container 4233ce8fc6cf55235846aea5cf172947f2a1ec0628650523db4d52a66482a485: Status 404 returned error can't find the container with id 4233ce8fc6cf55235846aea5cf172947f2a1ec0628650523db4d52a66482a485 Feb 19 08:12:58 crc kubenswrapper[5023]: I0219 08:12:58.650298 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" event={"ID":"acdff3eb-f5d2-48f5-bef3-08606374dc4d","Type":"ContainerStarted","Data":"4233ce8fc6cf55235846aea5cf172947f2a1ec0628650523db4d52a66482a485"} Feb 19 08:12:58 crc kubenswrapper[5023]: I0219 08:12:58.651188 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" event={"ID":"6afd6128-1c17-4490-8b98-52b684318f65","Type":"ContainerStarted","Data":"c990289b95794903f58ead4c68f20d2d38cb88212646551cc0aa15e4096d91d3"} Feb 19 08:13:03 crc kubenswrapper[5023]: I0219 08:13:03.681469 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" event={"ID":"acdff3eb-f5d2-48f5-bef3-08606374dc4d","Type":"ContainerStarted","Data":"3e553ac390cbcbef1e74eff0667404d07a5dfa37ff0b3e77657ab69b1db88658"} Feb 19 08:13:03 crc kubenswrapper[5023]: I0219 08:13:03.683121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" event={"ID":"6afd6128-1c17-4490-8b98-52b684318f65","Type":"ContainerStarted","Data":"f424aadd641ca6b236a64fb540937661a1bb3c4bef81944d86287dda7a372f4c"} Feb 19 08:13:03 crc kubenswrapper[5023]: I0219 08:13:03.683271 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 
08:13:03 crc kubenswrapper[5023]: I0219 08:13:03.698921 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" podStartSLOduration=2.136077326 podStartE2EDuration="6.698897547s" podCreationTimestamp="2026-02-19 08:12:57 +0000 UTC" firstStartedPulling="2026-02-19 08:12:57.937380455 +0000 UTC m=+735.594499403" lastFinishedPulling="2026-02-19 08:13:02.500200666 +0000 UTC m=+740.157319624" observedRunningTime="2026-02-19 08:13:03.698712292 +0000 UTC m=+741.355831250" watchObservedRunningTime="2026-02-19 08:13:03.698897547 +0000 UTC m=+741.356016495" Feb 19 08:13:03 crc kubenswrapper[5023]: I0219 08:13:03.717448 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" podStartSLOduration=1.985359462 podStartE2EDuration="6.7174298s" podCreationTimestamp="2026-02-19 08:12:57 +0000 UTC" firstStartedPulling="2026-02-19 08:12:57.752818829 +0000 UTC m=+735.409937777" lastFinishedPulling="2026-02-19 08:13:02.484889167 +0000 UTC m=+740.142008115" observedRunningTime="2026-02-19 08:13:03.714819432 +0000 UTC m=+741.371938390" watchObservedRunningTime="2026-02-19 08:13:03.7174298 +0000 UTC m=+741.374548748" Feb 19 08:13:04 crc kubenswrapper[5023]: I0219 08:13:04.690125 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:13:15 crc kubenswrapper[5023]: I0219 08:13:15.857917 5023 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 08:13:17 crc kubenswrapper[5023]: I0219 08:13:17.688034 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-674976f6cc-f4mpk" Feb 19 08:13:37 crc kubenswrapper[5023]: I0219 08:13:37.471654 5023 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-744474f4f9-cg2wm" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.222452 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph"] Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.223144 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.226829 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vxmlw" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.226832 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.238041 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sffgs"] Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.240226 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.244795 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.246007 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.299490 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph"] Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.338012 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tsc67"] Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.339575 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.342022 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.342277 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.342336 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mtdg2" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.342427 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.342595 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-l6q57"] Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.343433 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.346037 5023 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.352430 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-l6q57"] Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.370673 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmg8n\" (UniqueName: \"kubernetes.io/projected/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-kube-api-access-lmg8n\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.370724 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-startup\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.370788 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-metrics\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.370810 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-sockets\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc 
kubenswrapper[5023]: I0219 08:13:38.370874 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.370914 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51b4e594-f586-4108-ad83-8beb7cba09ca-metrics-certs\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.370958 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-conf\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.370986 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-metrics-certs\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371016 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-metrics-certs\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371032 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-reloader\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371060 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trn64\" (UniqueName: \"kubernetes.io/projected/51b4e594-f586-4108-ad83-8beb7cba09ca-kube-api-access-trn64\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371081 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bfc832d4-eeff-4559-b058-2599bb2c9baa-metallb-excludel2\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371113 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkshm\" (UniqueName: \"kubernetes.io/projected/bfc832d4-eeff-4559-b058-2599bb2c9baa-kube-api-access-mkshm\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371128 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-cert\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371178 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33eb4f2b-7821-4e6b-a69e-2cda1a6489e8-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fm5ph\" (UID: \"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.371203 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhls\" (UniqueName: \"kubernetes.io/projected/33eb4f2b-7821-4e6b-a69e-2cda1a6489e8-kube-api-access-qjhls\") pod \"frr-k8s-webhook-server-78b44bf5bb-fm5ph\" (UID: \"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.471899 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkshm\" (UniqueName: \"kubernetes.io/projected/bfc832d4-eeff-4559-b058-2599bb2c9baa-kube-api-access-mkshm\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.471950 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-cert\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.471986 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33eb4f2b-7821-4e6b-a69e-2cda1a6489e8-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fm5ph\" (UID: \"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472014 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qjhls\" (UniqueName: \"kubernetes.io/projected/33eb4f2b-7821-4e6b-a69e-2cda1a6489e8-kube-api-access-qjhls\") pod \"frr-k8s-webhook-server-78b44bf5bb-fm5ph\" (UID: \"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472040 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmg8n\" (UniqueName: \"kubernetes.io/projected/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-kube-api-access-lmg8n\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472067 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-startup\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472097 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-metrics\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472119 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-sockets\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472149 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472183 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51b4e594-f586-4108-ad83-8beb7cba09ca-metrics-certs\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472216 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-conf\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472248 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-metrics-certs\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472274 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-metrics-certs\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472295 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-reloader\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 
08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472316 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trn64\" (UniqueName: \"kubernetes.io/projected/51b4e594-f586-4108-ad83-8beb7cba09ca-kube-api-access-trn64\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472340 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bfc832d4-eeff-4559-b058-2599bb2c9baa-metallb-excludel2\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: E0219 08:13:38.472353 5023 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 08:13:38 crc kubenswrapper[5023]: E0219 08:13:38.472369 5023 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 19 08:13:38 crc kubenswrapper[5023]: E0219 08:13:38.472428 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist podName:bfc832d4-eeff-4559-b058-2599bb2c9baa nodeName:}" failed. No retries permitted until 2026-02-19 08:13:38.972397958 +0000 UTC m=+776.629516906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist") pod "speaker-tsc67" (UID: "bfc832d4-eeff-4559-b058-2599bb2c9baa") : secret "metallb-memberlist" not found Feb 19 08:13:38 crc kubenswrapper[5023]: E0219 08:13:38.472446 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-metrics-certs podName:bfc832d4-eeff-4559-b058-2599bb2c9baa nodeName:}" failed. 
No retries permitted until 2026-02-19 08:13:38.972439589 +0000 UTC m=+776.629558537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-metrics-certs") pod "speaker-tsc67" (UID: "bfc832d4-eeff-4559-b058-2599bb2c9baa") : secret "speaker-certs-secret" not found Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472647 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-sockets\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.472837 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-metrics\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.473116 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-conf\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.473387 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/51b4e594-f586-4108-ad83-8beb7cba09ca-frr-startup\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.473421 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/bfc832d4-eeff-4559-b058-2599bb2c9baa-metallb-excludel2\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.473681 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/51b4e594-f586-4108-ad83-8beb7cba09ca-reloader\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.478029 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-cert\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.478190 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51b4e594-f586-4108-ad83-8beb7cba09ca-metrics-certs\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.478222 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-metrics-certs\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.488791 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33eb4f2b-7821-4e6b-a69e-2cda1a6489e8-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fm5ph\" (UID: \"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.490984 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trn64\" (UniqueName: \"kubernetes.io/projected/51b4e594-f586-4108-ad83-8beb7cba09ca-kube-api-access-trn64\") pod \"frr-k8s-sffgs\" (UID: \"51b4e594-f586-4108-ad83-8beb7cba09ca\") " pod="metallb-system/frr-k8s-sffgs" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.493509 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhls\" (UniqueName: \"kubernetes.io/projected/33eb4f2b-7821-4e6b-a69e-2cda1a6489e8-kube-api-access-qjhls\") pod \"frr-k8s-webhook-server-78b44bf5bb-fm5ph\" (UID: \"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.503313 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmg8n\" (UniqueName: \"kubernetes.io/projected/52cb1a3f-622d-4b75-a16b-05a1b932eeeb-kube-api-access-lmg8n\") pod \"controller-69bbfbf88f-l6q57\" (UID: \"52cb1a3f-622d-4b75-a16b-05a1b932eeeb\") " pod="metallb-system/controller-69bbfbf88f-l6q57" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.508357 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkshm\" (UniqueName: \"kubernetes.io/projected/bfc832d4-eeff-4559-b058-2599bb2c9baa-kube-api-access-mkshm\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.542195 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.557405 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-sffgs"
Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.668240 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-l6q57"
Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.728900 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph"]
Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.959704 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" event={"ID":"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8","Type":"ContainerStarted","Data":"faff842f7c5b5b748d58d18b7b27e79150d3089ab72eda311dfac54779fb7210"}
Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.961442 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerStarted","Data":"ba77bb53d060906c5865b7647128fc8126c415bd9dc1dcd4d8ae5debec2e8958"}
Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.977537 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67"
Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.977672 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-metrics-certs\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67"
Feb 19 08:13:38 crc kubenswrapper[5023]: E0219 08:13:38.977729 5023 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 19 08:13:38 crc kubenswrapper[5023]: E0219 08:13:38.977820 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist podName:bfc832d4-eeff-4559-b058-2599bb2c9baa nodeName:}" failed. No retries permitted until 2026-02-19 08:13:39.977797357 +0000 UTC m=+777.634916315 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist") pod "speaker-tsc67" (UID: "bfc832d4-eeff-4559-b058-2599bb2c9baa") : secret "metallb-memberlist" not found
Feb 19 08:13:38 crc kubenswrapper[5023]: I0219 08:13:38.983731 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-metrics-certs\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67"
Feb 19 08:13:39 crc kubenswrapper[5023]: I0219 08:13:39.054329 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-l6q57"]
Feb 19 08:13:39 crc kubenswrapper[5023]: W0219 08:13:39.057259 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52cb1a3f_622d_4b75_a16b_05a1b932eeeb.slice/crio-c463f716b2b305d721ce7714420d5737bcbd63edf5d1c00ccd3769c61825a193 WatchSource:0}: Error finding container c463f716b2b305d721ce7714420d5737bcbd63edf5d1c00ccd3769c61825a193: Status 404 returned error can't find the container with id c463f716b2b305d721ce7714420d5737bcbd63edf5d1c00ccd3769c61825a193
Feb 19 08:13:39 crc kubenswrapper[5023]: I0219 08:13:39.970003 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-l6q57" event={"ID":"52cb1a3f-622d-4b75-a16b-05a1b932eeeb","Type":"ContainerStarted","Data":"9ddc765c10fe4f5869525c3d075386f41617b486ed31fdb9b5246ac454d17c6e"}
Feb 19 08:13:39 crc kubenswrapper[5023]: I0219 08:13:39.970260 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-l6q57" event={"ID":"52cb1a3f-622d-4b75-a16b-05a1b932eeeb","Type":"ContainerStarted","Data":"d3c59d9ed3357b9fc72830147e22e5bc9fccf2354b369a1c06df66c4bd5ee547"}
Feb 19 08:13:39 crc kubenswrapper[5023]: I0219 08:13:39.970278 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-l6q57"
Feb 19 08:13:39 crc kubenswrapper[5023]: I0219 08:13:39.970288 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-l6q57" event={"ID":"52cb1a3f-622d-4b75-a16b-05a1b932eeeb","Type":"ContainerStarted","Data":"c463f716b2b305d721ce7714420d5737bcbd63edf5d1c00ccd3769c61825a193"}
Feb 19 08:13:39 crc kubenswrapper[5023]: I0219 08:13:39.991709 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-l6q57" podStartSLOduration=1.991687746 podStartE2EDuration="1.991687746s" podCreationTimestamp="2026-02-19 08:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:13:39.982932383 +0000 UTC m=+777.640051331" watchObservedRunningTime="2026-02-19 08:13:39.991687746 +0000 UTC m=+777.648806694"
Feb 19 08:13:39 crc kubenswrapper[5023]: I0219 08:13:39.997736 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67"
Feb 19 08:13:40 crc kubenswrapper[5023]: I0219 08:13:40.002490 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfc832d4-eeff-4559-b058-2599bb2c9baa-memberlist\") pod \"speaker-tsc67\" (UID: \"bfc832d4-eeff-4559-b058-2599bb2c9baa\") " pod="metallb-system/speaker-tsc67"
Feb 19 08:13:40 crc kubenswrapper[5023]: I0219 08:13:40.157087 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tsc67"
Feb 19 08:13:40 crc kubenswrapper[5023]: W0219 08:13:40.176005 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc832d4_eeff_4559_b058_2599bb2c9baa.slice/crio-ad78713e5e0642a7ac6e4a80a84518ea5ca419c058b51587af8f79d494dc59fd WatchSource:0}: Error finding container ad78713e5e0642a7ac6e4a80a84518ea5ca419c058b51587af8f79d494dc59fd: Status 404 returned error can't find the container with id ad78713e5e0642a7ac6e4a80a84518ea5ca419c058b51587af8f79d494dc59fd
Feb 19 08:13:40 crc kubenswrapper[5023]: I0219 08:13:40.986062 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tsc67" event={"ID":"bfc832d4-eeff-4559-b058-2599bb2c9baa","Type":"ContainerStarted","Data":"77ea0d433cf93a50623c1f815107e5035d5ee2c9946aafcbcffee78e248741b8"}
Feb 19 08:13:40 crc kubenswrapper[5023]: I0219 08:13:40.986426 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tsc67" event={"ID":"bfc832d4-eeff-4559-b058-2599bb2c9baa","Type":"ContainerStarted","Data":"4698b9d300ca75b85852a242927eba50ed83d6da6646f8b96d89c257dc4e08f5"}
Feb 19 08:13:40 crc kubenswrapper[5023]: I0219 08:13:40.986441 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tsc67" event={"ID":"bfc832d4-eeff-4559-b058-2599bb2c9baa","Type":"ContainerStarted","Data":"ad78713e5e0642a7ac6e4a80a84518ea5ca419c058b51587af8f79d494dc59fd"}
Feb 19 08:13:40 crc kubenswrapper[5023]: I0219 08:13:40.986632 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tsc67"
Feb 19 08:13:41 crc kubenswrapper[5023]: I0219 08:13:41.005832 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tsc67" podStartSLOduration=3.005812902 podStartE2EDuration="3.005812902s" podCreationTimestamp="2026-02-19 08:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:13:41.00272085 +0000 UTC m=+778.659839808" watchObservedRunningTime="2026-02-19 08:13:41.005812902 +0000 UTC m=+778.662931840"
Feb 19 08:13:41 crc kubenswrapper[5023]: I0219 08:13:41.870486 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 08:13:41 crc kubenswrapper[5023]: I0219 08:13:41.870540 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 08:13:47 crc kubenswrapper[5023]: I0219 08:13:47.032746 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" event={"ID":"33eb4f2b-7821-4e6b-a69e-2cda1a6489e8","Type":"ContainerStarted","Data":"b7ab8db4bde7103a6058c9422c0ccea399472d31d91d06cad1b5bcc65e52c409"}
Feb 19 08:13:47 crc kubenswrapper[5023]: I0219 08:13:47.033575 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph"
Feb 19 08:13:47 crc kubenswrapper[5023]: I0219 08:13:47.035927 5023 generic.go:334] "Generic (PLEG): container finished" podID="51b4e594-f586-4108-ad83-8beb7cba09ca" containerID="41a93b3690502c3a982e522f3983516b4a14558d5c3d7e5ced80a39b010d5248" exitCode=0
Feb 19 08:13:47 crc kubenswrapper[5023]: I0219 08:13:47.036009 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerDied","Data":"41a93b3690502c3a982e522f3983516b4a14558d5c3d7e5ced80a39b010d5248"}
Feb 19 08:13:47 crc kubenswrapper[5023]: I0219 08:13:47.054231 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph" podStartSLOduration=1.59743076 podStartE2EDuration="9.054208942s" podCreationTimestamp="2026-02-19 08:13:38 +0000 UTC" firstStartedPulling="2026-02-19 08:13:38.739238198 +0000 UTC m=+776.396357146" lastFinishedPulling="2026-02-19 08:13:46.19601638 +0000 UTC m=+783.853135328" observedRunningTime="2026-02-19 08:13:47.048398507 +0000 UTC m=+784.705517455" watchObservedRunningTime="2026-02-19 08:13:47.054208942 +0000 UTC m=+784.711327900"
Feb 19 08:13:48 crc kubenswrapper[5023]: I0219 08:13:48.046688 5023 generic.go:334] "Generic (PLEG): container finished" podID="51b4e594-f586-4108-ad83-8beb7cba09ca" containerID="ca38984774595192ab84dacba62c70689c629d0d4a935cab3dfc9570198bce78" exitCode=0
Feb 19 08:13:48 crc kubenswrapper[5023]: I0219 08:13:48.046727 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerDied","Data":"ca38984774595192ab84dacba62c70689c629d0d4a935cab3dfc9570198bce78"}
Feb 19 08:13:49 crc kubenswrapper[5023]: I0219 08:13:49.057745 5023 generic.go:334] "Generic (PLEG): container finished" podID="51b4e594-f586-4108-ad83-8beb7cba09ca" containerID="b20c579e1598aa776b2f19fc09b7042854c69dd92ced401acdd95375df6612a9" exitCode=0
Feb 19 08:13:49 crc kubenswrapper[5023]: I0219 08:13:49.057832 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerDied","Data":"b20c579e1598aa776b2f19fc09b7042854c69dd92ced401acdd95375df6612a9"}
Feb 19 08:13:50 crc kubenswrapper[5023]: I0219 08:13:50.072698 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerStarted","Data":"f21736c88d486885e43820427c0935cca734e55365249fc7b83b28bac37f66fc"}
Feb 19 08:13:50 crc kubenswrapper[5023]: I0219 08:13:50.073161 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerStarted","Data":"1f9c8cab1b838f120b68d4a739985a046f4df58ec0ad5a392a8182e236f8fd86"}
Feb 19 08:13:50 crc kubenswrapper[5023]: I0219 08:13:50.073176 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerStarted","Data":"84d8e75c30827b9073e3b41afca7b409abd0cf41c9a87fbbee2efe33eb51bb4b"}
Feb 19 08:13:50 crc kubenswrapper[5023]: I0219 08:13:50.073187 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerStarted","Data":"55bc99381c4189dded08d3aece103c66a4bc8d50685a0f5d7c921527ff450600"}
Feb 19 08:13:50 crc kubenswrapper[5023]: I0219 08:13:50.073197 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerStarted","Data":"b0be4ad9fd699891f2bc2a374c9cc7f1b482c8ae82aea5580d529c2163537c9a"}
Feb 19 08:13:50 crc kubenswrapper[5023]: I0219 08:13:50.163025 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tsc67"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.084611 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sffgs" event={"ID":"51b4e594-f586-4108-ad83-8beb7cba09ca","Type":"ContainerStarted","Data":"5022499ec23c707fe01967309c5fa2c55c15d8b634ccbf63dce94d31ffb9019d"}
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.085391 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sffgs"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.114680 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sffgs" podStartSLOduration=5.547536847 podStartE2EDuration="13.11464357s" podCreationTimestamp="2026-02-19 08:13:38 +0000 UTC" firstStartedPulling="2026-02-19 08:13:38.67983171 +0000 UTC m=+776.336950658" lastFinishedPulling="2026-02-19 08:13:46.246938423 +0000 UTC m=+783.904057381" observedRunningTime="2026-02-19 08:13:51.105362583 +0000 UTC m=+788.762481541" watchObservedRunningTime="2026-02-19 08:13:51.11464357 +0000 UTC m=+788.771762528"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.783250 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"]
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.784742 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.787463 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.811087 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"]
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.886472 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.886824 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.887116 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpl2s\" (UniqueName: \"kubernetes.io/projected/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-kube-api-access-hpl2s\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.988728 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.989013 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpl2s\" (UniqueName: \"kubernetes.io/projected/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-kube-api-access-hpl2s\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.989478 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.989257 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:51 crc kubenswrapper[5023]: I0219 08:13:51.989862 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:52 crc kubenswrapper[5023]: I0219 08:13:52.015271 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpl2s\" (UniqueName: \"kubernetes.io/projected/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-kube-api-access-hpl2s\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:52 crc kubenswrapper[5023]: I0219 08:13:52.110339 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:13:52 crc kubenswrapper[5023]: I0219 08:13:52.549141 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"]
Feb 19 08:13:53 crc kubenswrapper[5023]: I0219 08:13:53.098608 5023 generic.go:334] "Generic (PLEG): container finished" podID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerID="82c3b4163823d0bc3fc80cd15c9757d1cf283203cbe839ca1a7515e0fc3e81b5" exitCode=0
Feb 19 08:13:53 crc kubenswrapper[5023]: I0219 08:13:53.098682 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc" event={"ID":"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191","Type":"ContainerDied","Data":"82c3b4163823d0bc3fc80cd15c9757d1cf283203cbe839ca1a7515e0fc3e81b5"}
Feb 19 08:13:53 crc kubenswrapper[5023]: I0219 08:13:53.098717 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc" event={"ID":"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191","Type":"ContainerStarted","Data":"e7346653e6d9f711dc3b97f5068bba6da6032b8d818990d28b02dc7775ad2bd5"}
Feb 19 08:13:53 crc kubenswrapper[5023]: I0219 08:13:53.558650 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-sffgs"
Feb 19 08:13:53 crc kubenswrapper[5023]: I0219 08:13:53.609746 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sffgs"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.138147 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8kg88"]
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.139651 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.147058 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kg88"]
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.217844 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-utilities\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.217904 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-catalog-content\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.218160 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfk6h\" (UniqueName: \"kubernetes.io/projected/33971fe5-284b-4be8-b01d-0955ecd98986-kube-api-access-dfk6h\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.319768 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfk6h\" (UniqueName: \"kubernetes.io/projected/33971fe5-284b-4be8-b01d-0955ecd98986-kube-api-access-dfk6h\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.319848 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-utilities\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.319880 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-catalog-content\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.320335 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-catalog-content\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.320743 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-utilities\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.338543 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfk6h\" (UniqueName: \"kubernetes.io/projected/33971fe5-284b-4be8-b01d-0955ecd98986-kube-api-access-dfk6h\") pod \"redhat-operators-8kg88\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:54 crc kubenswrapper[5023]: I0219 08:13:54.453955 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:13:55 crc kubenswrapper[5023]: I0219 08:13:55.025966 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kg88"]
Feb 19 08:13:55 crc kubenswrapper[5023]: W0219 08:13:55.036835 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33971fe5_284b_4be8_b01d_0955ecd98986.slice/crio-231c334a913b7e1aae6ec3ea5bf68eade7ab4b99aa51fb2a3906b35e3f409dd0 WatchSource:0}: Error finding container 231c334a913b7e1aae6ec3ea5bf68eade7ab4b99aa51fb2a3906b35e3f409dd0: Status 404 returned error can't find the container with id 231c334a913b7e1aae6ec3ea5bf68eade7ab4b99aa51fb2a3906b35e3f409dd0
Feb 19 08:13:55 crc kubenswrapper[5023]: I0219 08:13:55.112453 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kg88" event={"ID":"33971fe5-284b-4be8-b01d-0955ecd98986","Type":"ContainerStarted","Data":"231c334a913b7e1aae6ec3ea5bf68eade7ab4b99aa51fb2a3906b35e3f409dd0"}
Feb 19 08:13:56 crc kubenswrapper[5023]: I0219 08:13:56.126055 5023 generic.go:334] "Generic (PLEG): container finished" podID="33971fe5-284b-4be8-b01d-0955ecd98986" containerID="815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764" exitCode=0
Feb 19 08:13:56 crc kubenswrapper[5023]: I0219 08:13:56.126097 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kg88" event={"ID":"33971fe5-284b-4be8-b01d-0955ecd98986","Type":"ContainerDied","Data":"815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764"}
Feb 19 08:13:58 crc kubenswrapper[5023]: I0219 08:13:58.139829 5023 generic.go:334] "Generic (PLEG): container finished" podID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerID="5295b6abd598260b1aa1b68a38e0ae8e8d87ed90f3da6bdd77321bb52667f6c1" exitCode=0
Feb 19 08:13:58 crc kubenswrapper[5023]: I0219 08:13:58.139993 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc" event={"ID":"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191","Type":"ContainerDied","Data":"5295b6abd598260b1aa1b68a38e0ae8e8d87ed90f3da6bdd77321bb52667f6c1"}
Feb 19 08:13:58 crc kubenswrapper[5023]: I0219 08:13:58.547273 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fm5ph"
Feb 19 08:13:58 crc kubenswrapper[5023]: I0219 08:13:58.672988 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-l6q57"
Feb 19 08:13:59 crc kubenswrapper[5023]: I0219 08:13:59.160753 5023 generic.go:334] "Generic (PLEG): container finished" podID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerID="ddefbcfc18e1998f903bd269aad8237ffcd3cc9c663e278050c3517b3033d9f6" exitCode=0
Feb 19 08:13:59 crc kubenswrapper[5023]: I0219 08:13:59.160830 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc" event={"ID":"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191","Type":"ContainerDied","Data":"ddefbcfc18e1998f903bd269aad8237ffcd3cc9c663e278050c3517b3033d9f6"}
Feb 19 08:13:59 crc kubenswrapper[5023]: I0219 08:13:59.164981 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kg88" event={"ID":"33971fe5-284b-4be8-b01d-0955ecd98986","Type":"ContainerStarted","Data":"e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160"}
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.172403 5023 generic.go:334] "Generic (PLEG): container finished" podID="33971fe5-284b-4be8-b01d-0955ecd98986" containerID="e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160" exitCode=0
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.172442 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kg88" event={"ID":"33971fe5-284b-4be8-b01d-0955ecd98986","Type":"ContainerDied","Data":"e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160"}
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.476671 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.518692 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpl2s\" (UniqueName: \"kubernetes.io/projected/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-kube-api-access-hpl2s\") pod \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") "
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.518737 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-util\") pod \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") "
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.518783 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-bundle\") pod \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\" (UID: \"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191\") "
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.519915 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-bundle" (OuterVolumeSpecName: "bundle") pod "f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" (UID: "f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.525415 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-kube-api-access-hpl2s" (OuterVolumeSpecName: "kube-api-access-hpl2s") pod "f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" (UID: "f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191"). InnerVolumeSpecName "kube-api-access-hpl2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.536212 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-util" (OuterVolumeSpecName: "util") pod "f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" (UID: "f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.619604 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpl2s\" (UniqueName: \"kubernetes.io/projected/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-kube-api-access-hpl2s\") on node \"crc\" DevicePath \"\""
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.619655 5023 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-util\") on node \"crc\" DevicePath \"\""
Feb 19 08:14:00 crc kubenswrapper[5023]: I0219 08:14:00.619667 5023 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:14:01 crc kubenswrapper[5023]: I0219 08:14:01.184443 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kg88" event={"ID":"33971fe5-284b-4be8-b01d-0955ecd98986","Type":"ContainerStarted","Data":"dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259"}
Feb 19 08:14:01 crc kubenswrapper[5023]: I0219 08:14:01.187839 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc" event={"ID":"f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191","Type":"ContainerDied","Data":"e7346653e6d9f711dc3b97f5068bba6da6032b8d818990d28b02dc7775ad2bd5"}
Feb 19 08:14:01 crc kubenswrapper[5023]: I0219 08:14:01.188018 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7346653e6d9f711dc3b97f5068bba6da6032b8d818990d28b02dc7775ad2bd5"
Feb 19 08:14:01 crc kubenswrapper[5023]: I0219 08:14:01.188116 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc"
Feb 19 08:14:01 crc kubenswrapper[5023]: I0219 08:14:01.207938 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8kg88" podStartSLOduration=3.997493059 podStartE2EDuration="7.207920273s" podCreationTimestamp="2026-02-19 08:13:54 +0000 UTC" firstStartedPulling="2026-02-19 08:13:57.35052471 +0000 UTC m=+795.007643658" lastFinishedPulling="2026-02-19 08:14:00.560951924 +0000 UTC m=+798.218070872" observedRunningTime="2026-02-19 08:14:01.202178981 +0000 UTC m=+798.859297959" watchObservedRunningTime="2026-02-19 08:14:01.207920273 +0000 UTC m=+798.865039241"
Feb 19 08:14:04 crc kubenswrapper[5023]: I0219 08:14:04.455199 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:14:04 crc kubenswrapper[5023]: I0219 08:14:04.455797 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8kg88"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.258234 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc"]
Feb 19 08:14:05 crc kubenswrapper[5023]: E0219 08:14:05.258659 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerName="util"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.258685 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerName="util"
Feb 19 08:14:05 crc kubenswrapper[5023]: E0219 08:14:05.258708 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerName="extract"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.258718 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerName="extract"
Feb 19 08:14:05 crc kubenswrapper[5023]: E0219 08:14:05.258730 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerName="pull"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.258740 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerName="pull"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.258934 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191" containerName="extract"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.259649 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.263572 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.263608 5023 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-9x4dx"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.264073 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.277925 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc"]
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.296232 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3d99336-4057-47c8-a7b2-6028d98dce8b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-dm2dc\" (UID: \"c3d99336-4057-47c8-a7b2-6028d98dce8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.296319 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkqj\" (UniqueName: \"kubernetes.io/projected/c3d99336-4057-47c8-a7b2-6028d98dce8b-kube-api-access-vkkqj\") pod \"cert-manager-operator-controller-manager-66c8bdd694-dm2dc\" (UID: \"c3d99336-4057-47c8-a7b2-6028d98dce8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc"
Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.397930 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vkkqj\" (UniqueName: \"kubernetes.io/projected/c3d99336-4057-47c8-a7b2-6028d98dce8b-kube-api-access-vkkqj\") pod \"cert-manager-operator-controller-manager-66c8bdd694-dm2dc\" (UID: \"c3d99336-4057-47c8-a7b2-6028d98dce8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.398021 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3d99336-4057-47c8-a7b2-6028d98dce8b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-dm2dc\" (UID: \"c3d99336-4057-47c8-a7b2-6028d98dce8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.398471 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3d99336-4057-47c8-a7b2-6028d98dce8b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-dm2dc\" (UID: \"c3d99336-4057-47c8-a7b2-6028d98dce8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.426068 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkkqj\" (UniqueName: \"kubernetes.io/projected/c3d99336-4057-47c8-a7b2-6028d98dce8b-kube-api-access-vkkqj\") pod \"cert-manager-operator-controller-manager-66c8bdd694-dm2dc\" (UID: \"c3d99336-4057-47c8-a7b2-6028d98dce8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.502940 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8kg88" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="registry-server" probeResult="failure" output=< Feb 19 08:14:05 crc kubenswrapper[5023]: timeout: failed to 
connect service ":50051" within 1s Feb 19 08:14:05 crc kubenswrapper[5023]: > Feb 19 08:14:05 crc kubenswrapper[5023]: I0219 08:14:05.587498 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" Feb 19 08:14:06 crc kubenswrapper[5023]: I0219 08:14:06.185993 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc"] Feb 19 08:14:06 crc kubenswrapper[5023]: W0219 08:14:06.196981 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d99336_4057_47c8_a7b2_6028d98dce8b.slice/crio-3e277629455b7ac88f670cc42c2e5cda5ea41c456bf2ce33c96fd9d096b57904 WatchSource:0}: Error finding container 3e277629455b7ac88f670cc42c2e5cda5ea41c456bf2ce33c96fd9d096b57904: Status 404 returned error can't find the container with id 3e277629455b7ac88f670cc42c2e5cda5ea41c456bf2ce33c96fd9d096b57904 Feb 19 08:14:06 crc kubenswrapper[5023]: I0219 08:14:06.220661 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" event={"ID":"c3d99336-4057-47c8-a7b2-6028d98dce8b","Type":"ContainerStarted","Data":"3e277629455b7ac88f670cc42c2e5cda5ea41c456bf2ce33c96fd9d096b57904"} Feb 19 08:14:08 crc kubenswrapper[5023]: I0219 08:14:08.561071 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sffgs" Feb 19 08:14:09 crc kubenswrapper[5023]: I0219 08:14:09.291851 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" event={"ID":"c3d99336-4057-47c8-a7b2-6028d98dce8b","Type":"ContainerStarted","Data":"f73ea203745a81a2870320994b0823687236911aa78ae392153f8aabe9ca63a2"} Feb 19 08:14:09 crc kubenswrapper[5023]: I0219 08:14:09.323710 5023 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-dm2dc" podStartSLOduration=1.717557998 podStartE2EDuration="4.323680764s" podCreationTimestamp="2026-02-19 08:14:05 +0000 UTC" firstStartedPulling="2026-02-19 08:14:06.200436268 +0000 UTC m=+803.857555236" lastFinishedPulling="2026-02-19 08:14:08.806559054 +0000 UTC m=+806.463678002" observedRunningTime="2026-02-19 08:14:09.318791454 +0000 UTC m=+806.975910422" watchObservedRunningTime="2026-02-19 08:14:09.323680764 +0000 UTC m=+806.980799772" Feb 19 08:14:11 crc kubenswrapper[5023]: I0219 08:14:11.873656 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:14:11 crc kubenswrapper[5023]: I0219 08:14:11.874040 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.827719 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8qvzr"] Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.828733 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.830850 5023 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dz7b6" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.830994 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.831217 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.835697 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8qvzr"] Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.850722 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0363881-ec76-4013-8589-43bd4b142716-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8qvzr\" (UID: \"b0363881-ec76-4013-8589-43bd4b142716\") " pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.850760 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdtw\" (UniqueName: \"kubernetes.io/projected/b0363881-ec76-4013-8589-43bd4b142716-kube-api-access-fkdtw\") pod \"cert-manager-webhook-6888856db4-8qvzr\" (UID: \"b0363881-ec76-4013-8589-43bd4b142716\") " pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.951717 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0363881-ec76-4013-8589-43bd4b142716-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8qvzr\" (UID: 
\"b0363881-ec76-4013-8589-43bd4b142716\") " pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.951761 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdtw\" (UniqueName: \"kubernetes.io/projected/b0363881-ec76-4013-8589-43bd4b142716-kube-api-access-fkdtw\") pod \"cert-manager-webhook-6888856db4-8qvzr\" (UID: \"b0363881-ec76-4013-8589-43bd4b142716\") " pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.970877 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0363881-ec76-4013-8589-43bd4b142716-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8qvzr\" (UID: \"b0363881-ec76-4013-8589-43bd4b142716\") " pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:13 crc kubenswrapper[5023]: I0219 08:14:13.971771 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdtw\" (UniqueName: \"kubernetes.io/projected/b0363881-ec76-4013-8589-43bd4b142716-kube-api-access-fkdtw\") pod \"cert-manager-webhook-6888856db4-8qvzr\" (UID: \"b0363881-ec76-4013-8589-43bd4b142716\") " pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:14 crc kubenswrapper[5023]: I0219 08:14:14.151521 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:14 crc kubenswrapper[5023]: I0219 08:14:14.468181 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8qvzr"] Feb 19 08:14:14 crc kubenswrapper[5023]: I0219 08:14:14.586375 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8kg88" Feb 19 08:14:14 crc kubenswrapper[5023]: I0219 08:14:14.630757 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8kg88" Feb 19 08:14:14 crc kubenswrapper[5023]: I0219 08:14:14.822036 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kg88"] Feb 19 08:14:15 crc kubenswrapper[5023]: I0219 08:14:15.337825 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" event={"ID":"b0363881-ec76-4013-8589-43bd4b142716","Type":"ContainerStarted","Data":"9a097e3d8daa7fe673c2ea61973c3cb7346b8e3d7bd52b3a976deeee9118ab33"} Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.343720 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8kg88" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="registry-server" containerID="cri-o://dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259" gracePeriod=2 Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.718959 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kg88" Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.807794 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-catalog-content\") pod \"33971fe5-284b-4be8-b01d-0955ecd98986\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.808049 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-utilities\") pod \"33971fe5-284b-4be8-b01d-0955ecd98986\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.808243 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfk6h\" (UniqueName: \"kubernetes.io/projected/33971fe5-284b-4be8-b01d-0955ecd98986-kube-api-access-dfk6h\") pod \"33971fe5-284b-4be8-b01d-0955ecd98986\" (UID: \"33971fe5-284b-4be8-b01d-0955ecd98986\") " Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.808898 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-utilities" (OuterVolumeSpecName: "utilities") pod "33971fe5-284b-4be8-b01d-0955ecd98986" (UID: "33971fe5-284b-4be8-b01d-0955ecd98986"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.814012 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33971fe5-284b-4be8-b01d-0955ecd98986-kube-api-access-dfk6h" (OuterVolumeSpecName: "kube-api-access-dfk6h") pod "33971fe5-284b-4be8-b01d-0955ecd98986" (UID: "33971fe5-284b-4be8-b01d-0955ecd98986"). InnerVolumeSpecName "kube-api-access-dfk6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.909267 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.909303 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfk6h\" (UniqueName: \"kubernetes.io/projected/33971fe5-284b-4be8-b01d-0955ecd98986-kube-api-access-dfk6h\") on node \"crc\" DevicePath \"\"" Feb 19 08:14:16 crc kubenswrapper[5023]: I0219 08:14:16.943970 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33971fe5-284b-4be8-b01d-0955ecd98986" (UID: "33971fe5-284b-4be8-b01d-0955ecd98986"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.010925 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33971fe5-284b-4be8-b01d-0955ecd98986-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.350455 5023 generic.go:334] "Generic (PLEG): container finished" podID="33971fe5-284b-4be8-b01d-0955ecd98986" containerID="dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259" exitCode=0 Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.350534 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kg88" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.350561 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kg88" event={"ID":"33971fe5-284b-4be8-b01d-0955ecd98986","Type":"ContainerDied","Data":"dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259"} Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.351354 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kg88" event={"ID":"33971fe5-284b-4be8-b01d-0955ecd98986","Type":"ContainerDied","Data":"231c334a913b7e1aae6ec3ea5bf68eade7ab4b99aa51fb2a3906b35e3f409dd0"} Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.351381 5023 scope.go:117] "RemoveContainer" containerID="dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.367690 5023 scope.go:117] "RemoveContainer" containerID="e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.381932 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kg88"] Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.382774 5023 scope.go:117] "RemoveContainer" containerID="815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.388347 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8kg88"] Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.401037 5023 scope.go:117] "RemoveContainer" containerID="dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259" Feb 19 08:14:17 crc kubenswrapper[5023]: E0219 08:14:17.401478 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259\": container with ID starting with dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259 not found: ID does not exist" containerID="dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.401515 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259"} err="failed to get container status \"dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259\": rpc error: code = NotFound desc = could not find container \"dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259\": container with ID starting with dddd6881cae551d8c973da9a29694800a6590c0def3b1fbfa6356b958690e259 not found: ID does not exist" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.401547 5023 scope.go:117] "RemoveContainer" containerID="e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160" Feb 19 08:14:17 crc kubenswrapper[5023]: E0219 08:14:17.402053 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160\": container with ID starting with e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160 not found: ID does not exist" containerID="e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.402091 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160"} err="failed to get container status \"e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160\": rpc error: code = NotFound desc = could not find container \"e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160\": container with ID 
starting with e874e695746a967c2f46da4cfd423f943cffa9f9e7b9c340856c472c6a05f160 not found: ID does not exist" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.402122 5023 scope.go:117] "RemoveContainer" containerID="815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764" Feb 19 08:14:17 crc kubenswrapper[5023]: E0219 08:14:17.403315 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764\": container with ID starting with 815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764 not found: ID does not exist" containerID="815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.403450 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764"} err="failed to get container status \"815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764\": rpc error: code = NotFound desc = could not find container \"815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764\": container with ID starting with 815752d9a07a907c4d8c50540b2f80f474215d851ca7752c5d73c1f301c6e764 not found: ID does not exist" Feb 19 08:14:17 crc kubenswrapper[5023]: I0219 08:14:17.483744 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" path="/var/lib/kubelet/pods/33971fe5-284b-4be8-b01d-0955ecd98986/volumes" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.383533 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4qkq4"] Feb 19 08:14:19 crc kubenswrapper[5023]: E0219 08:14:19.384192 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="registry-server" Feb 19 08:14:19 crc 
kubenswrapper[5023]: I0219 08:14:19.384208 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="registry-server" Feb 19 08:14:19 crc kubenswrapper[5023]: E0219 08:14:19.384240 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="extract-utilities" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.384248 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="extract-utilities" Feb 19 08:14:19 crc kubenswrapper[5023]: E0219 08:14:19.384267 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="extract-content" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.384276 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="extract-content" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.384449 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="33971fe5-284b-4be8-b01d-0955ecd98986" containerName="registry-server" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.384996 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.386540 5023 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4hzcl" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.397519 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4qkq4"] Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.443086 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d8e36c4-29f0-4acb-b3c2-8fa44738751a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4qkq4\" (UID: \"9d8e36c4-29f0-4acb-b3c2-8fa44738751a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.443212 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclp2\" (UniqueName: \"kubernetes.io/projected/9d8e36c4-29f0-4acb-b3c2-8fa44738751a-kube-api-access-kclp2\") pod \"cert-manager-cainjector-5545bd876-4qkq4\" (UID: \"9d8e36c4-29f0-4acb-b3c2-8fa44738751a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.544775 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d8e36c4-29f0-4acb-b3c2-8fa44738751a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4qkq4\" (UID: \"9d8e36c4-29f0-4acb-b3c2-8fa44738751a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.545119 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kclp2\" (UniqueName: 
\"kubernetes.io/projected/9d8e36c4-29f0-4acb-b3c2-8fa44738751a-kube-api-access-kclp2\") pod \"cert-manager-cainjector-5545bd876-4qkq4\" (UID: \"9d8e36c4-29f0-4acb-b3c2-8fa44738751a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.577717 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclp2\" (UniqueName: \"kubernetes.io/projected/9d8e36c4-29f0-4acb-b3c2-8fa44738751a-kube-api-access-kclp2\") pod \"cert-manager-cainjector-5545bd876-4qkq4\" (UID: \"9d8e36c4-29f0-4acb-b3c2-8fa44738751a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.577786 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d8e36c4-29f0-4acb-b3c2-8fa44738751a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4qkq4\" (UID: \"9d8e36c4-29f0-4acb-b3c2-8fa44738751a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:19 crc kubenswrapper[5023]: I0219 08:14:19.712414 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" Feb 19 08:14:20 crc kubenswrapper[5023]: I0219 08:14:20.264890 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4qkq4"] Feb 19 08:14:20 crc kubenswrapper[5023]: I0219 08:14:20.374196 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" event={"ID":"9d8e36c4-29f0-4acb-b3c2-8fa44738751a","Type":"ContainerStarted","Data":"d1b39fb53ab5b176d3ad3da7000710d233a3afda2368722015e0c4c64155725d"} Feb 19 08:14:24 crc kubenswrapper[5023]: I0219 08:14:24.404450 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" event={"ID":"9d8e36c4-29f0-4acb-b3c2-8fa44738751a","Type":"ContainerStarted","Data":"3f0e16ce54216d0128d94c6f07d08e31136edd7acf0740e420ec554b53b2b2a5"} Feb 19 08:14:24 crc kubenswrapper[5023]: I0219 08:14:24.407777 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" event={"ID":"b0363881-ec76-4013-8589-43bd4b142716","Type":"ContainerStarted","Data":"f864117b42dd58c09c99dccb3a15e79fad945d571c80107eef2b118178de692c"} Feb 19 08:14:24 crc kubenswrapper[5023]: I0219 08:14:24.408136 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:24 crc kubenswrapper[5023]: I0219 08:14:24.423466 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-4qkq4" podStartSLOduration=2.049691511 podStartE2EDuration="5.423448944s" podCreationTimestamp="2026-02-19 08:14:19 +0000 UTC" firstStartedPulling="2026-02-19 08:14:20.277089273 +0000 UTC m=+817.934208261" lastFinishedPulling="2026-02-19 08:14:23.650846746 +0000 UTC m=+821.307965694" observedRunningTime="2026-02-19 08:14:24.422690804 +0000 UTC m=+822.079809752" 
watchObservedRunningTime="2026-02-19 08:14:24.423448944 +0000 UTC m=+822.080567892" Feb 19 08:14:24 crc kubenswrapper[5023]: I0219 08:14:24.461188 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" podStartSLOduration=2.2856031359999998 podStartE2EDuration="11.461162926s" podCreationTimestamp="2026-02-19 08:14:13 +0000 UTC" firstStartedPulling="2026-02-19 08:14:14.49011969 +0000 UTC m=+812.147238638" lastFinishedPulling="2026-02-19 08:14:23.66567948 +0000 UTC m=+821.322798428" observedRunningTime="2026-02-19 08:14:24.446926278 +0000 UTC m=+822.104045226" watchObservedRunningTime="2026-02-19 08:14:24.461162926 +0000 UTC m=+822.118281864" Feb 19 08:14:29 crc kubenswrapper[5023]: I0219 08:14:29.156693 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-8qvzr" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.714360 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-7sjkx"] Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.716541 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.725105 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-7sjkx"] Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.727657 5023 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t4t2z" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.837144 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktf7\" (UniqueName: \"kubernetes.io/projected/872749de-64b7-4a74-a8d9-70bb7d41b496-kube-api-access-vktf7\") pod \"cert-manager-545d4d4674-7sjkx\" (UID: \"872749de-64b7-4a74-a8d9-70bb7d41b496\") " pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.837476 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872749de-64b7-4a74-a8d9-70bb7d41b496-bound-sa-token\") pod \"cert-manager-545d4d4674-7sjkx\" (UID: \"872749de-64b7-4a74-a8d9-70bb7d41b496\") " pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.938714 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872749de-64b7-4a74-a8d9-70bb7d41b496-bound-sa-token\") pod \"cert-manager-545d4d4674-7sjkx\" (UID: \"872749de-64b7-4a74-a8d9-70bb7d41b496\") " pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.938789 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vktf7\" (UniqueName: \"kubernetes.io/projected/872749de-64b7-4a74-a8d9-70bb7d41b496-kube-api-access-vktf7\") pod \"cert-manager-545d4d4674-7sjkx\" (UID: 
\"872749de-64b7-4a74-a8d9-70bb7d41b496\") " pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.957585 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vktf7\" (UniqueName: \"kubernetes.io/projected/872749de-64b7-4a74-a8d9-70bb7d41b496-kube-api-access-vktf7\") pod \"cert-manager-545d4d4674-7sjkx\" (UID: \"872749de-64b7-4a74-a8d9-70bb7d41b496\") " pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:32 crc kubenswrapper[5023]: I0219 08:14:32.960344 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872749de-64b7-4a74-a8d9-70bb7d41b496-bound-sa-token\") pod \"cert-manager-545d4d4674-7sjkx\" (UID: \"872749de-64b7-4a74-a8d9-70bb7d41b496\") " pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:33 crc kubenswrapper[5023]: I0219 08:14:33.036377 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-7sjkx" Feb 19 08:14:33 crc kubenswrapper[5023]: I0219 08:14:33.500141 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-7sjkx"] Feb 19 08:14:33 crc kubenswrapper[5023]: I0219 08:14:33.944160 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5tv8q"] Feb 19 08:14:33 crc kubenswrapper[5023]: I0219 08:14:33.952846 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:33 crc kubenswrapper[5023]: I0219 08:14:33.960202 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5tv8q"] Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.051948 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-utilities\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.052003 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4j6l\" (UniqueName: \"kubernetes.io/projected/e904668a-9908-47b0-9fac-c5c6c580376f-kube-api-access-r4j6l\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.052155 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-catalog-content\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.153670 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-catalog-content\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.153778 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-utilities\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.153800 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4j6l\" (UniqueName: \"kubernetes.io/projected/e904668a-9908-47b0-9fac-c5c6c580376f-kube-api-access-r4j6l\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.154116 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-catalog-content\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.154235 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-utilities\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.174409 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4j6l\" (UniqueName: \"kubernetes.io/projected/e904668a-9908-47b0-9fac-c5c6c580376f-kube-api-access-r4j6l\") pod \"redhat-marketplace-5tv8q\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.291097 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.481206 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-7sjkx" event={"ID":"872749de-64b7-4a74-a8d9-70bb7d41b496","Type":"ContainerStarted","Data":"00381f40e5d920844b707ee5f73f677093a7870c3437e148a7f4fd68eb1cdd42"} Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.481517 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-7sjkx" event={"ID":"872749de-64b7-4a74-a8d9-70bb7d41b496","Type":"ContainerStarted","Data":"da99a4de3936361daa5ce6a69d9184e46a0af5708e8ca710cd937612180e6ba8"} Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.498709 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-7sjkx" podStartSLOduration=2.498690049 podStartE2EDuration="2.498690049s" podCreationTimestamp="2026-02-19 08:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:14:34.494796135 +0000 UTC m=+832.151915083" watchObservedRunningTime="2026-02-19 08:14:34.498690049 +0000 UTC m=+832.155808997" Feb 19 08:14:34 crc kubenswrapper[5023]: I0219 08:14:34.718721 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5tv8q"] Feb 19 08:14:34 crc kubenswrapper[5023]: W0219 08:14:34.721988 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode904668a_9908_47b0_9fac_c5c6c580376f.slice/crio-bd6b0ae8c6650971f47a82b99e7965bba962c82b031a00de08648bd5d335fda4 WatchSource:0}: Error finding container bd6b0ae8c6650971f47a82b99e7965bba962c82b031a00de08648bd5d335fda4: Status 404 returned error can't find the container with id bd6b0ae8c6650971f47a82b99e7965bba962c82b031a00de08648bd5d335fda4 
Feb 19 08:14:35 crc kubenswrapper[5023]: I0219 08:14:35.497575 5023 generic.go:334] "Generic (PLEG): container finished" podID="e904668a-9908-47b0-9fac-c5c6c580376f" containerID="fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02" exitCode=0 Feb 19 08:14:35 crc kubenswrapper[5023]: I0219 08:14:35.498109 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tv8q" event={"ID":"e904668a-9908-47b0-9fac-c5c6c580376f","Type":"ContainerDied","Data":"fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02"} Feb 19 08:14:35 crc kubenswrapper[5023]: I0219 08:14:35.498243 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tv8q" event={"ID":"e904668a-9908-47b0-9fac-c5c6c580376f","Type":"ContainerStarted","Data":"bd6b0ae8c6650971f47a82b99e7965bba962c82b031a00de08648bd5d335fda4"} Feb 19 08:14:36 crc kubenswrapper[5023]: I0219 08:14:36.514569 5023 generic.go:334] "Generic (PLEG): container finished" podID="e904668a-9908-47b0-9fac-c5c6c580376f" containerID="3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c" exitCode=0 Feb 19 08:14:36 crc kubenswrapper[5023]: I0219 08:14:36.514668 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tv8q" event={"ID":"e904668a-9908-47b0-9fac-c5c6c580376f","Type":"ContainerDied","Data":"3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c"} Feb 19 08:14:37 crc kubenswrapper[5023]: I0219 08:14:37.524067 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tv8q" event={"ID":"e904668a-9908-47b0-9fac-c5c6c580376f","Type":"ContainerStarted","Data":"0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07"} Feb 19 08:14:37 crc kubenswrapper[5023]: I0219 08:14:37.541982 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5tv8q" 
podStartSLOduration=3.06323708 podStartE2EDuration="4.54195585s" podCreationTimestamp="2026-02-19 08:14:33 +0000 UTC" firstStartedPulling="2026-02-19 08:14:35.501614417 +0000 UTC m=+833.158733395" lastFinishedPulling="2026-02-19 08:14:36.980333217 +0000 UTC m=+834.637452165" observedRunningTime="2026-02-19 08:14:37.540382698 +0000 UTC m=+835.197501666" watchObservedRunningTime="2026-02-19 08:14:37.54195585 +0000 UTC m=+835.199074818" Feb 19 08:14:41 crc kubenswrapper[5023]: I0219 08:14:41.871169 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:14:41 crc kubenswrapper[5023]: I0219 08:14:41.872400 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:14:41 crc kubenswrapper[5023]: I0219 08:14:41.872503 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:14:41 crc kubenswrapper[5023]: I0219 08:14:41.873968 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9107fa6c65c5bdaadd0e295cacd61be82459a4c5b244fe42220dcb2855d3001"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:14:41 crc kubenswrapper[5023]: I0219 08:14:41.874113 5023 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://c9107fa6c65c5bdaadd0e295cacd61be82459a4c5b244fe42220dcb2855d3001" gracePeriod=600 Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.566866 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="c9107fa6c65c5bdaadd0e295cacd61be82459a4c5b244fe42220dcb2855d3001" exitCode=0 Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.567030 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"c9107fa6c65c5bdaadd0e295cacd61be82459a4c5b244fe42220dcb2855d3001"} Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.567143 5023 scope.go:117] "RemoveContainer" containerID="032bc002ef9d6211ff37891b971af058f31e755ae2b8ee7a564c359cdfecd43d" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.615946 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gxjzt"] Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.616763 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxjzt" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.619019 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-g82dm" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.619318 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.619552 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.628264 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gxjzt"] Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.675768 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqtq6\" (UniqueName: \"kubernetes.io/projected/48fdeee1-38cd-41d0-b5cb-7b84ef22dd12-kube-api-access-tqtq6\") pod \"openstack-operator-index-gxjzt\" (UID: \"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12\") " pod="openstack-operators/openstack-operator-index-gxjzt" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.776910 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqtq6\" (UniqueName: \"kubernetes.io/projected/48fdeee1-38cd-41d0-b5cb-7b84ef22dd12-kube-api-access-tqtq6\") pod \"openstack-operator-index-gxjzt\" (UID: \"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12\") " pod="openstack-operators/openstack-operator-index-gxjzt" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.805805 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqtq6\" (UniqueName: \"kubernetes.io/projected/48fdeee1-38cd-41d0-b5cb-7b84ef22dd12-kube-api-access-tqtq6\") pod \"openstack-operator-index-gxjzt\" (UID: 
\"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12\") " pod="openstack-operators/openstack-operator-index-gxjzt" Feb 19 08:14:42 crc kubenswrapper[5023]: I0219 08:14:42.947061 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gxjzt" Feb 19 08:14:43 crc kubenswrapper[5023]: I0219 08:14:43.423028 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gxjzt"] Feb 19 08:14:43 crc kubenswrapper[5023]: W0219 08:14:43.438951 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fdeee1_38cd_41d0_b5cb_7b84ef22dd12.slice/crio-47684f709a9b97632720901f17f6a3c4482cf5cde1eddafe60c2820de1c34327 WatchSource:0}: Error finding container 47684f709a9b97632720901f17f6a3c4482cf5cde1eddafe60c2820de1c34327: Status 404 returned error can't find the container with id 47684f709a9b97632720901f17f6a3c4482cf5cde1eddafe60c2820de1c34327 Feb 19 08:14:43 crc kubenswrapper[5023]: I0219 08:14:43.579053 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"650edbaf66bd4a3e9e9e9ff44722cf8acdf5b9eac44eb0f6a93249eddba0373f"} Feb 19 08:14:43 crc kubenswrapper[5023]: I0219 08:14:43.580604 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxjzt" event={"ID":"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12","Type":"ContainerStarted","Data":"47684f709a9b97632720901f17f6a3c4482cf5cde1eddafe60c2820de1c34327"} Feb 19 08:14:44 crc kubenswrapper[5023]: I0219 08:14:44.291650 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:44 crc kubenswrapper[5023]: I0219 08:14:44.292004 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:44 crc kubenswrapper[5023]: I0219 08:14:44.355152 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:44 crc kubenswrapper[5023]: I0219 08:14:44.640488 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:47 crc kubenswrapper[5023]: I0219 08:14:47.189002 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gxjzt"] Feb 19 08:14:47 crc kubenswrapper[5023]: I0219 08:14:47.613662 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxjzt" event={"ID":"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12","Type":"ContainerStarted","Data":"60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849"} Feb 19 08:14:47 crc kubenswrapper[5023]: I0219 08:14:47.635781 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gxjzt" podStartSLOduration=2.229159172 podStartE2EDuration="5.635762201s" podCreationTimestamp="2026-02-19 08:14:42 +0000 UTC" firstStartedPulling="2026-02-19 08:14:43.441890153 +0000 UTC m=+841.099009101" lastFinishedPulling="2026-02-19 08:14:46.848493192 +0000 UTC m=+844.505612130" observedRunningTime="2026-02-19 08:14:47.627788321 +0000 UTC m=+845.284907289" watchObservedRunningTime="2026-02-19 08:14:47.635762201 +0000 UTC m=+845.292881139" Feb 19 08:14:47 crc kubenswrapper[5023]: I0219 08:14:47.989941 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7v4rv"] Feb 19 08:14:47 crc kubenswrapper[5023]: I0219 08:14:47.990953 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:47 crc kubenswrapper[5023]: I0219 08:14:47.998510 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7v4rv"] Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.015174 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthmx\" (UniqueName: \"kubernetes.io/projected/ca2cee23-359d-4810-8ded-0ce03a1c4add-kube-api-access-wthmx\") pod \"openstack-operator-index-7v4rv\" (UID: \"ca2cee23-359d-4810-8ded-0ce03a1c4add\") " pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.116410 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wthmx\" (UniqueName: \"kubernetes.io/projected/ca2cee23-359d-4810-8ded-0ce03a1c4add-kube-api-access-wthmx\") pod \"openstack-operator-index-7v4rv\" (UID: \"ca2cee23-359d-4810-8ded-0ce03a1c4add\") " pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.142665 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthmx\" (UniqueName: \"kubernetes.io/projected/ca2cee23-359d-4810-8ded-0ce03a1c4add-kube-api-access-wthmx\") pod \"openstack-operator-index-7v4rv\" (UID: \"ca2cee23-359d-4810-8ded-0ce03a1c4add\") " pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.316017 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.619052 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gxjzt" podUID="48fdeee1-38cd-41d0-b5cb-7b84ef22dd12" containerName="registry-server" containerID="cri-o://60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849" gracePeriod=2 Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.756572 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7v4rv"] Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.953840 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gxjzt" Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.954808 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqtq6\" (UniqueName: \"kubernetes.io/projected/48fdeee1-38cd-41d0-b5cb-7b84ef22dd12-kube-api-access-tqtq6\") pod \"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12\" (UID: \"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12\") " Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.959113 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fdeee1-38cd-41d0-b5cb-7b84ef22dd12-kube-api-access-tqtq6" (OuterVolumeSpecName: "kube-api-access-tqtq6") pod "48fdeee1-38cd-41d0-b5cb-7b84ef22dd12" (UID: "48fdeee1-38cd-41d0-b5cb-7b84ef22dd12"). InnerVolumeSpecName "kube-api-access-tqtq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.986093 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5tv8q"] Feb 19 08:14:48 crc kubenswrapper[5023]: I0219 08:14:48.986321 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5tv8q" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="registry-server" containerID="cri-o://0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07" gracePeriod=2 Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.055922 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqtq6\" (UniqueName: \"kubernetes.io/projected/48fdeee1-38cd-41d0-b5cb-7b84ef22dd12-kube-api-access-tqtq6\") on node \"crc\" DevicePath \"\"" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.354482 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.472884 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-catalog-content\") pod \"e904668a-9908-47b0-9fac-c5c6c580376f\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.472946 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4j6l\" (UniqueName: \"kubernetes.io/projected/e904668a-9908-47b0-9fac-c5c6c580376f-kube-api-access-r4j6l\") pod \"e904668a-9908-47b0-9fac-c5c6c580376f\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.473064 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-utilities\") pod \"e904668a-9908-47b0-9fac-c5c6c580376f\" (UID: \"e904668a-9908-47b0-9fac-c5c6c580376f\") " Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.474085 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-utilities" (OuterVolumeSpecName: "utilities") pod "e904668a-9908-47b0-9fac-c5c6c580376f" (UID: "e904668a-9908-47b0-9fac-c5c6c580376f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.481759 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e904668a-9908-47b0-9fac-c5c6c580376f-kube-api-access-r4j6l" (OuterVolumeSpecName: "kube-api-access-r4j6l") pod "e904668a-9908-47b0-9fac-c5c6c580376f" (UID: "e904668a-9908-47b0-9fac-c5c6c580376f"). InnerVolumeSpecName "kube-api-access-r4j6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.494739 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e904668a-9908-47b0-9fac-c5c6c580376f" (UID: "e904668a-9908-47b0-9fac-c5c6c580376f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.574729 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.574758 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4j6l\" (UniqueName: \"kubernetes.io/projected/e904668a-9908-47b0-9fac-c5c6c580376f-kube-api-access-r4j6l\") on node \"crc\" DevicePath \"\"" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.574770 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e904668a-9908-47b0-9fac-c5c6c580376f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.626707 5023 generic.go:334] "Generic (PLEG): container finished" podID="e904668a-9908-47b0-9fac-c5c6c580376f" containerID="0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07" exitCode=0 Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.626768 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5tv8q" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.626776 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tv8q" event={"ID":"e904668a-9908-47b0-9fac-c5c6c580376f","Type":"ContainerDied","Data":"0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07"} Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.626816 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tv8q" event={"ID":"e904668a-9908-47b0-9fac-c5c6c580376f","Type":"ContainerDied","Data":"bd6b0ae8c6650971f47a82b99e7965bba962c82b031a00de08648bd5d335fda4"} Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.626834 5023 scope.go:117] "RemoveContainer" containerID="0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.628436 5023 generic.go:334] "Generic (PLEG): container finished" podID="48fdeee1-38cd-41d0-b5cb-7b84ef22dd12" containerID="60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849" exitCode=0 Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.628504 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxjzt" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.628496 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxjzt" event={"ID":"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12","Type":"ContainerDied","Data":"60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849"} Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.628568 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxjzt" event={"ID":"48fdeee1-38cd-41d0-b5cb-7b84ef22dd12","Type":"ContainerDied","Data":"47684f709a9b97632720901f17f6a3c4482cf5cde1eddafe60c2820de1c34327"} Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.629911 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7v4rv" event={"ID":"ca2cee23-359d-4810-8ded-0ce03a1c4add","Type":"ContainerStarted","Data":"555daa8a20ccebc92e9d1a6ba1fc216d1642866605c005da693d1fb81e5476a7"} Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.629943 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7v4rv" event={"ID":"ca2cee23-359d-4810-8ded-0ce03a1c4add","Type":"ContainerStarted","Data":"a1132aaf59b0d9ab6998623709db5c4bbfcb0fe3f673768386e658c8ca4e2339"} Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.650562 5023 scope.go:117] "RemoveContainer" containerID="3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.657833 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7v4rv" podStartSLOduration=2.59002532 podStartE2EDuration="2.657810957s" podCreationTimestamp="2026-02-19 08:14:47 +0000 UTC" firstStartedPulling="2026-02-19 08:14:48.770655824 +0000 UTC m=+846.427774802" lastFinishedPulling="2026-02-19 
08:14:48.838441491 +0000 UTC m=+846.495560439" observedRunningTime="2026-02-19 08:14:49.649746805 +0000 UTC m=+847.306865763" watchObservedRunningTime="2026-02-19 08:14:49.657810957 +0000 UTC m=+847.314929905" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.672068 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gxjzt"] Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.681774 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gxjzt"] Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.686447 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5tv8q"] Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.686486 5023 scope.go:117] "RemoveContainer" containerID="fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.704565 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5tv8q"] Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.717793 5023 scope.go:117] "RemoveContainer" containerID="0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07" Feb 19 08:14:49 crc kubenswrapper[5023]: E0219 08:14:49.718209 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07\": container with ID starting with 0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07 not found: ID does not exist" containerID="0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.718240 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07"} err="failed to get container status 
\"0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07\": rpc error: code = NotFound desc = could not find container \"0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07\": container with ID starting with 0ccbfbe25057803b94fe24e7e53d0c4c0154a81b178d3c2e66a3550e9acccd07 not found: ID does not exist" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.718263 5023 scope.go:117] "RemoveContainer" containerID="3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c" Feb 19 08:14:49 crc kubenswrapper[5023]: E0219 08:14:49.718476 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c\": container with ID starting with 3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c not found: ID does not exist" containerID="3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.718515 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c"} err="failed to get container status \"3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c\": rpc error: code = NotFound desc = could not find container \"3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c\": container with ID starting with 3c74a5086ff0ec06d6928757d138ad35ba16ce51d11ea472933e0e98d245386c not found: ID does not exist" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.718529 5023 scope.go:117] "RemoveContainer" containerID="fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02" Feb 19 08:14:49 crc kubenswrapper[5023]: E0219 08:14:49.718911 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02\": container with ID starting with fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02 not found: ID does not exist" containerID="fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.718929 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02"} err="failed to get container status \"fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02\": rpc error: code = NotFound desc = could not find container \"fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02\": container with ID starting with fbc96cac3d76067968f546c8f85ca6a9e38abc1f11e3e1fdcd46889781e87a02 not found: ID does not exist" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.718942 5023 scope.go:117] "RemoveContainer" containerID="60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.733326 5023 scope.go:117] "RemoveContainer" containerID="60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849" Feb 19 08:14:49 crc kubenswrapper[5023]: E0219 08:14:49.733753 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849\": container with ID starting with 60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849 not found: ID does not exist" containerID="60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849" Feb 19 08:14:49 crc kubenswrapper[5023]: I0219 08:14:49.733804 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849"} err="failed to get container status 
\"60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849\": rpc error: code = NotFound desc = could not find container \"60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849\": container with ID starting with 60713276063f8d2a26459c405552c4e83bc49e02f971eb88160366c4f5b20849 not found: ID does not exist" Feb 19 08:14:51 crc kubenswrapper[5023]: I0219 08:14:51.486997 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fdeee1-38cd-41d0-b5cb-7b84ef22dd12" path="/var/lib/kubelet/pods/48fdeee1-38cd-41d0-b5cb-7b84ef22dd12/volumes" Feb 19 08:14:51 crc kubenswrapper[5023]: I0219 08:14:51.487868 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" path="/var/lib/kubelet/pods/e904668a-9908-47b0-9fac-c5c6c580376f/volumes" Feb 19 08:14:58 crc kubenswrapper[5023]: I0219 08:14:58.317338 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:58 crc kubenswrapper[5023]: I0219 08:14:58.318236 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:58 crc kubenswrapper[5023]: I0219 08:14:58.362113 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:14:58 crc kubenswrapper[5023]: I0219 08:14:58.733036 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7v4rv" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.147296 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265"] Feb 19 08:15:00 crc kubenswrapper[5023]: E0219 08:15:00.147538 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="extract-content" Feb 
19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.147549 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="extract-content" Feb 19 08:15:00 crc kubenswrapper[5023]: E0219 08:15:00.147561 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="registry-server" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.147568 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="registry-server" Feb 19 08:15:00 crc kubenswrapper[5023]: E0219 08:15:00.147582 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="extract-utilities" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.147588 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="extract-utilities" Feb 19 08:15:00 crc kubenswrapper[5023]: E0219 08:15:00.147597 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fdeee1-38cd-41d0-b5cb-7b84ef22dd12" containerName="registry-server" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.147643 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fdeee1-38cd-41d0-b5cb-7b84ef22dd12" containerName="registry-server" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.147748 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fdeee1-38cd-41d0-b5cb-7b84ef22dd12" containerName="registry-server" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.147758 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e904668a-9908-47b0-9fac-c5c6c580376f" containerName="registry-server" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.148152 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.150388 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.151066 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.157875 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265"] Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.232981 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m"] Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.233706 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5923158e-5054-42aa-8301-f4547dbf7c20-secret-volume\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.233751 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n75fp\" (UniqueName: \"kubernetes.io/projected/5923158e-5054-42aa-8301-f4547dbf7c20-kube-api-access-n75fp\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.233785 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5923158e-5054-42aa-8301-f4547dbf7c20-config-volume\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.236128 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.238815 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wlrcz" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.243332 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m"] Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.334505 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5923158e-5054-42aa-8301-f4547dbf7c20-secret-volume\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.334555 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwrwq\" (UniqueName: \"kubernetes.io/projected/b44022ae-c88d-4656-a82a-bb5cbd80226a-kube-api-access-bwrwq\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.334577 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-util\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.334608 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n75fp\" (UniqueName: \"kubernetes.io/projected/5923158e-5054-42aa-8301-f4547dbf7c20-kube-api-access-n75fp\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.334789 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5923158e-5054-42aa-8301-f4547dbf7c20-config-volume\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.334932 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-bundle\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.335588 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5923158e-5054-42aa-8301-f4547dbf7c20-config-volume\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.343288 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5923158e-5054-42aa-8301-f4547dbf7c20-secret-volume\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.350213 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n75fp\" (UniqueName: \"kubernetes.io/projected/5923158e-5054-42aa-8301-f4547dbf7c20-kube-api-access-n75fp\") pod \"collect-profiles-29524815-pq265\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.435369 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwrwq\" (UniqueName: \"kubernetes.io/projected/b44022ae-c88d-4656-a82a-bb5cbd80226a-kube-api-access-bwrwq\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.435448 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-util\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.435948 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-util\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.436026 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-bundle\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.436280 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-bundle\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.461537 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwrwq\" (UniqueName: \"kubernetes.io/projected/b44022ae-c88d-4656-a82a-bb5cbd80226a-kube-api-access-bwrwq\") pod \"8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.466482 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.562101 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.682710 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265"] Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.717375 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" event={"ID":"5923158e-5054-42aa-8301-f4547dbf7c20","Type":"ContainerStarted","Data":"bd8484d1702866c4020902d428d46e2fc462fc72d141871f336c9fc193b6560d"} Feb 19 08:15:00 crc kubenswrapper[5023]: I0219 08:15:00.754160 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m"] Feb 19 08:15:00 crc kubenswrapper[5023]: W0219 08:15:00.773294 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb44022ae_c88d_4656_a82a_bb5cbd80226a.slice/crio-b20925b9cf3470f96f6ad6a62e500ea6dd484f9581956d55fc0bcb2e84c12c6e WatchSource:0}: Error finding container b20925b9cf3470f96f6ad6a62e500ea6dd484f9581956d55fc0bcb2e84c12c6e: Status 404 returned error can't find the container with id b20925b9cf3470f96f6ad6a62e500ea6dd484f9581956d55fc0bcb2e84c12c6e Feb 19 08:15:01 crc kubenswrapper[5023]: I0219 08:15:01.726059 5023 generic.go:334] "Generic (PLEG): container finished" podID="5923158e-5054-42aa-8301-f4547dbf7c20" containerID="1cbf1d292ed00420b4e9eca851568b248c31464fc7122057aaf49c38c143732a" exitCode=0 Feb 19 08:15:01 crc kubenswrapper[5023]: I0219 08:15:01.726137 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" event={"ID":"5923158e-5054-42aa-8301-f4547dbf7c20","Type":"ContainerDied","Data":"1cbf1d292ed00420b4e9eca851568b248c31464fc7122057aaf49c38c143732a"} 
Feb 19 08:15:01 crc kubenswrapper[5023]: I0219 08:15:01.727587 5023 generic.go:334] "Generic (PLEG): container finished" podID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerID="7030f6aa7465693443d1eee98d192f8419d57473c8ff8a13e586e1eecf1333f5" exitCode=0 Feb 19 08:15:01 crc kubenswrapper[5023]: I0219 08:15:01.727635 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" event={"ID":"b44022ae-c88d-4656-a82a-bb5cbd80226a","Type":"ContainerDied","Data":"7030f6aa7465693443d1eee98d192f8419d57473c8ff8a13e586e1eecf1333f5"} Feb 19 08:15:01 crc kubenswrapper[5023]: I0219 08:15:01.727655 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" event={"ID":"b44022ae-c88d-4656-a82a-bb5cbd80226a","Type":"ContainerStarted","Data":"b20925b9cf3470f96f6ad6a62e500ea6dd484f9581956d55fc0bcb2e84c12c6e"} Feb 19 08:15:02 crc kubenswrapper[5023]: I0219 08:15:02.738613 5023 generic.go:334] "Generic (PLEG): container finished" podID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerID="67236e33d263b25147105e96d341b97eb9b88c42b35e268d9475d31b7901a2ea" exitCode=0 Feb 19 08:15:02 crc kubenswrapper[5023]: I0219 08:15:02.738762 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" event={"ID":"b44022ae-c88d-4656-a82a-bb5cbd80226a","Type":"ContainerDied","Data":"67236e33d263b25147105e96d341b97eb9b88c42b35e268d9475d31b7901a2ea"} Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.044641 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.169633 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5923158e-5054-42aa-8301-f4547dbf7c20-config-volume\") pod \"5923158e-5054-42aa-8301-f4547dbf7c20\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.169976 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5923158e-5054-42aa-8301-f4547dbf7c20-secret-volume\") pod \"5923158e-5054-42aa-8301-f4547dbf7c20\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.170020 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n75fp\" (UniqueName: \"kubernetes.io/projected/5923158e-5054-42aa-8301-f4547dbf7c20-kube-api-access-n75fp\") pod \"5923158e-5054-42aa-8301-f4547dbf7c20\" (UID: \"5923158e-5054-42aa-8301-f4547dbf7c20\") " Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.170480 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5923158e-5054-42aa-8301-f4547dbf7c20-config-volume" (OuterVolumeSpecName: "config-volume") pod "5923158e-5054-42aa-8301-f4547dbf7c20" (UID: "5923158e-5054-42aa-8301-f4547dbf7c20"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.175379 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5923158e-5054-42aa-8301-f4547dbf7c20-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5923158e-5054-42aa-8301-f4547dbf7c20" (UID: "5923158e-5054-42aa-8301-f4547dbf7c20"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.176377 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5923158e-5054-42aa-8301-f4547dbf7c20-kube-api-access-n75fp" (OuterVolumeSpecName: "kube-api-access-n75fp") pod "5923158e-5054-42aa-8301-f4547dbf7c20" (UID: "5923158e-5054-42aa-8301-f4547dbf7c20"). InnerVolumeSpecName "kube-api-access-n75fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.271134 5023 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5923158e-5054-42aa-8301-f4547dbf7c20-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.271169 5023 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5923158e-5054-42aa-8301-f4547dbf7c20-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.271179 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n75fp\" (UniqueName: \"kubernetes.io/projected/5923158e-5054-42aa-8301-f4547dbf7c20-kube-api-access-n75fp\") on node \"crc\" DevicePath \"\"" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.746281 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" event={"ID":"5923158e-5054-42aa-8301-f4547dbf7c20","Type":"ContainerDied","Data":"bd8484d1702866c4020902d428d46e2fc462fc72d141871f336c9fc193b6560d"} Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.746320 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524815-pq265" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.746323 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd8484d1702866c4020902d428d46e2fc462fc72d141871f336c9fc193b6560d" Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.748717 5023 generic.go:334] "Generic (PLEG): container finished" podID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerID="a08483f4cb7cc46fd19de720d9f9c0f74208398df79827ccaf6aa6aed4bfdab9" exitCode=0 Feb 19 08:15:03 crc kubenswrapper[5023]: I0219 08:15:03.748749 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" event={"ID":"b44022ae-c88d-4656-a82a-bb5cbd80226a","Type":"ContainerDied","Data":"a08483f4cb7cc46fd19de720d9f9c0f74208398df79827ccaf6aa6aed4bfdab9"} Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.067431 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.197509 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-util\") pod \"b44022ae-c88d-4656-a82a-bb5cbd80226a\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.197595 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwrwq\" (UniqueName: \"kubernetes.io/projected/b44022ae-c88d-4656-a82a-bb5cbd80226a-kube-api-access-bwrwq\") pod \"b44022ae-c88d-4656-a82a-bb5cbd80226a\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.197712 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-bundle\") pod \"b44022ae-c88d-4656-a82a-bb5cbd80226a\" (UID: \"b44022ae-c88d-4656-a82a-bb5cbd80226a\") " Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.198524 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-bundle" (OuterVolumeSpecName: "bundle") pod "b44022ae-c88d-4656-a82a-bb5cbd80226a" (UID: "b44022ae-c88d-4656-a82a-bb5cbd80226a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.202331 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44022ae-c88d-4656-a82a-bb5cbd80226a-kube-api-access-bwrwq" (OuterVolumeSpecName: "kube-api-access-bwrwq") pod "b44022ae-c88d-4656-a82a-bb5cbd80226a" (UID: "b44022ae-c88d-4656-a82a-bb5cbd80226a"). InnerVolumeSpecName "kube-api-access-bwrwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.210978 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-util" (OuterVolumeSpecName: "util") pod "b44022ae-c88d-4656-a82a-bb5cbd80226a" (UID: "b44022ae-c88d-4656-a82a-bb5cbd80226a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.299094 5023 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.299125 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwrwq\" (UniqueName: \"kubernetes.io/projected/b44022ae-c88d-4656-a82a-bb5cbd80226a-kube-api-access-bwrwq\") on node \"crc\" DevicePath \"\"" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.299134 5023 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b44022ae-c88d-4656-a82a-bb5cbd80226a-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.762074 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" event={"ID":"b44022ae-c88d-4656-a82a-bb5cbd80226a","Type":"ContainerDied","Data":"b20925b9cf3470f96f6ad6a62e500ea6dd484f9581956d55fc0bcb2e84c12c6e"} Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.762119 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b20925b9cf3470f96f6ad6a62e500ea6dd484f9581956d55fc0bcb2e84c12c6e" Feb 19 08:15:05 crc kubenswrapper[5023]: I0219 08:15:05.762179 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.578848 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r"] Feb 19 08:15:08 crc kubenswrapper[5023]: E0219 08:15:08.579339 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerName="util" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.579350 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerName="util" Feb 19 08:15:08 crc kubenswrapper[5023]: E0219 08:15:08.579365 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5923158e-5054-42aa-8301-f4547dbf7c20" containerName="collect-profiles" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.579370 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5923158e-5054-42aa-8301-f4547dbf7c20" containerName="collect-profiles" Feb 19 08:15:08 crc kubenswrapper[5023]: E0219 08:15:08.579381 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerName="pull" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.579387 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerName="pull" Feb 19 08:15:08 crc kubenswrapper[5023]: E0219 08:15:08.579394 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerName="extract" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.579399 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerName="extract" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.579531 5023 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5923158e-5054-42aa-8301-f4547dbf7c20" containerName="collect-profiles" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.579546 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44022ae-c88d-4656-a82a-bb5cbd80226a" containerName="extract" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.579945 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.583562 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-crvcg" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.597824 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r"] Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.643996 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfnr\" (UniqueName: \"kubernetes.io/projected/050217a8-a68f-46d3-bad5-aab926acbb4a-kube-api-access-ftfnr\") pod \"openstack-operator-controller-init-bbb967fcc-6924r\" (UID: \"050217a8-a68f-46d3-bad5-aab926acbb4a\") " pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.745592 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfnr\" (UniqueName: \"kubernetes.io/projected/050217a8-a68f-46d3-bad5-aab926acbb4a-kube-api-access-ftfnr\") pod \"openstack-operator-controller-init-bbb967fcc-6924r\" (UID: \"050217a8-a68f-46d3-bad5-aab926acbb4a\") " pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.764090 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftfnr\" (UniqueName: 
\"kubernetes.io/projected/050217a8-a68f-46d3-bad5-aab926acbb4a-kube-api-access-ftfnr\") pod \"openstack-operator-controller-init-bbb967fcc-6924r\" (UID: \"050217a8-a68f-46d3-bad5-aab926acbb4a\") " pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:15:08 crc kubenswrapper[5023]: I0219 08:15:08.896988 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:15:09 crc kubenswrapper[5023]: I0219 08:15:09.176536 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r"] Feb 19 08:15:09 crc kubenswrapper[5023]: W0219 08:15:09.180591 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050217a8_a68f_46d3_bad5_aab926acbb4a.slice/crio-2e1a90f012a134958748071a75e13ed3e4d98f26158c40cc8aba29c5b05626f8 WatchSource:0}: Error finding container 2e1a90f012a134958748071a75e13ed3e4d98f26158c40cc8aba29c5b05626f8: Status 404 returned error can't find the container with id 2e1a90f012a134958748071a75e13ed3e4d98f26158c40cc8aba29c5b05626f8 Feb 19 08:15:09 crc kubenswrapper[5023]: I0219 08:15:09.801173 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" event={"ID":"050217a8-a68f-46d3-bad5-aab926acbb4a","Type":"ContainerStarted","Data":"2e1a90f012a134958748071a75e13ed3e4d98f26158c40cc8aba29c5b05626f8"} Feb 19 08:15:13 crc kubenswrapper[5023]: I0219 08:15:13.845889 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" event={"ID":"050217a8-a68f-46d3-bad5-aab926acbb4a","Type":"ContainerStarted","Data":"3bfb08e07fb12b59401e60326f73f450324558b92c36187a92af5861612e46b4"} Feb 19 08:15:13 crc kubenswrapper[5023]: I0219 08:15:13.846664 5023 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:15:13 crc kubenswrapper[5023]: I0219 08:15:13.874506 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" podStartSLOduration=1.4245220729999999 podStartE2EDuration="5.874484552s" podCreationTimestamp="2026-02-19 08:15:08 +0000 UTC" firstStartedPulling="2026-02-19 08:15:09.184472816 +0000 UTC m=+866.841591764" lastFinishedPulling="2026-02-19 08:15:13.634435295 +0000 UTC m=+871.291554243" observedRunningTime="2026-02-19 08:15:13.873325001 +0000 UTC m=+871.530443959" watchObservedRunningTime="2026-02-19 08:15:13.874484552 +0000 UTC m=+871.531603500" Feb 19 08:15:18 crc kubenswrapper[5023]: I0219 08:15:18.901462 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.568750 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.575763 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.587413 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xmcrn" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.607336 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.608746 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.608874 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.611673 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-g5rp5" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.624133 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.625056 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.629758 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-56fbx" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.640029 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.652030 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.664270 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.665537 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.666188 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.667129 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.671759 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.680783 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wjw77" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.680943 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-plkfd" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.683859 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.687321 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrd2\" (UniqueName: \"kubernetes.io/projected/9719932b-2c04-47a0-97b8-492d4a5d297c-kube-api-access-dzrd2\") pod \"cinder-operator-controller-manager-5d946d989d-jvqln\" (UID: \"9719932b-2c04-47a0-97b8-492d4a5d297c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.687388 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx764\" (UniqueName: \"kubernetes.io/projected/cdfff2ca-6dc1-4850-806d-7fb9195e276a-kube-api-access-mx764\") pod \"designate-operator-controller-manager-6d8bf5c495-ppgdp\" (UID: \"cdfff2ca-6dc1-4850-806d-7fb9195e276a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.687417 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7vj\" (UniqueName: \"kubernetes.io/projected/677afd79-73b0-45db-a513-6b77dfb09992-kube-api-access-8b7vj\") pod \"barbican-operator-controller-manager-868647ff47-5xq6x\" (UID: \"677afd79-73b0-45db-a513-6b77dfb09992\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.687460 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5hw\" (UniqueName: \"kubernetes.io/projected/05d6abf5-ddc2-460e-8b10-252292257fdd-kube-api-access-gc5hw\") pod \"glance-operator-controller-manager-77987464f4-hsz4t\" (UID: \"05d6abf5-ddc2-460e-8b10-252292257fdd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.687485 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ts5\" (UniqueName: \"kubernetes.io/projected/a396f869-bade-4ff1-9031-ac899d4d6ed2-kube-api-access-49ts5\") pod \"heat-operator-controller-manager-69f49c598c-s74tq\" (UID: \"a396f869-bade-4ff1-9031-ac899d4d6ed2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.710255 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.711560 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.713757 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zr2fq" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.717141 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-txbbh"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.717959 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.722939 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.723621 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.723824 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2rjrq" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.732185 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.733152 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.739828 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-txbbh"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.741067 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-rrp6r" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.744707 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.763946 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.764947 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.768375 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hck2v" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.793527 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b7vj\" (UniqueName: \"kubernetes.io/projected/677afd79-73b0-45db-a513-6b77dfb09992-kube-api-access-8b7vj\") pod \"barbican-operator-controller-manager-868647ff47-5xq6x\" (UID: \"677afd79-73b0-45db-a513-6b77dfb09992\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.793589 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc5hw\" (UniqueName: \"kubernetes.io/projected/05d6abf5-ddc2-460e-8b10-252292257fdd-kube-api-access-gc5hw\") pod \"glance-operator-controller-manager-77987464f4-hsz4t\" (UID: \"05d6abf5-ddc2-460e-8b10-252292257fdd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.793618 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49ts5\" (UniqueName: \"kubernetes.io/projected/a396f869-bade-4ff1-9031-ac899d4d6ed2-kube-api-access-49ts5\") pod \"heat-operator-controller-manager-69f49c598c-s74tq\" (UID: \"a396f869-bade-4ff1-9031-ac899d4d6ed2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.793683 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrd2\" (UniqueName: \"kubernetes.io/projected/9719932b-2c04-47a0-97b8-492d4a5d297c-kube-api-access-dzrd2\") pod 
\"cinder-operator-controller-manager-5d946d989d-jvqln\" (UID: \"9719932b-2c04-47a0-97b8-492d4a5d297c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.793708 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx764\" (UniqueName: \"kubernetes.io/projected/cdfff2ca-6dc1-4850-806d-7fb9195e276a-kube-api-access-mx764\") pod \"designate-operator-controller-manager-6d8bf5c495-ppgdp\" (UID: \"cdfff2ca-6dc1-4850-806d-7fb9195e276a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.820784 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.827597 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ts5\" (UniqueName: \"kubernetes.io/projected/a396f869-bade-4ff1-9031-ac899d4d6ed2-kube-api-access-49ts5\") pod \"heat-operator-controller-manager-69f49c598c-s74tq\" (UID: \"a396f869-bade-4ff1-9031-ac899d4d6ed2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.835458 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b7vj\" (UniqueName: \"kubernetes.io/projected/677afd79-73b0-45db-a513-6b77dfb09992-kube-api-access-8b7vj\") pod \"barbican-operator-controller-manager-868647ff47-5xq6x\" (UID: \"677afd79-73b0-45db-a513-6b77dfb09992\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.836057 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5hw\" (UniqueName: 
\"kubernetes.io/projected/05d6abf5-ddc2-460e-8b10-252292257fdd-kube-api-access-gc5hw\") pod \"glance-operator-controller-manager-77987464f4-hsz4t\" (UID: \"05d6abf5-ddc2-460e-8b10-252292257fdd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.842064 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrd2\" (UniqueName: \"kubernetes.io/projected/9719932b-2c04-47a0-97b8-492d4a5d297c-kube-api-access-dzrd2\") pod \"cinder-operator-controller-manager-5d946d989d-jvqln\" (UID: \"9719932b-2c04-47a0-97b8-492d4a5d297c\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.847848 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.849152 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.857261 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx764\" (UniqueName: \"kubernetes.io/projected/cdfff2ca-6dc1-4850-806d-7fb9195e276a-kube-api-access-mx764\") pod \"designate-operator-controller-manager-6d8bf5c495-ppgdp\" (UID: \"cdfff2ca-6dc1-4850-806d-7fb9195e276a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.861453 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6fgh2" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.875626 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.892684 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.893893 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.895197 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.895254 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29rwc\" (UniqueName: \"kubernetes.io/projected/b73d7256-9139-4cbd-b7a7-7b4b3852aafb-kube-api-access-29rwc\") pod \"ironic-operator-controller-manager-554564d7fc-wgs6h\" (UID: \"b73d7256-9139-4cbd-b7a7-7b4b3852aafb\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.895282 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxmvb\" (UniqueName: \"kubernetes.io/projected/f96cd850-d719-444c-8015-fdffb335df27-kube-api-access-vxmvb\") pod \"horizon-operator-controller-manager-5b9b8895d5-lfj5q\" (UID: \"f96cd850-d719-444c-8015-fdffb335df27\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.895310 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2kmz\" (UniqueName: \"kubernetes.io/projected/e61f8f71-02fe-448d-a0ef-1d2290d558b1-kube-api-access-p2kmz\") pod \"keystone-operator-controller-manager-b4d948c87-58ml6\" (UID: \"e61f8f71-02fe-448d-a0ef-1d2290d558b1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" Feb 19 08:15:38 
crc kubenswrapper[5023]: I0219 08:15:38.895343 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkrr\" (UniqueName: \"kubernetes.io/projected/61b3e902-e458-49b8-8924-fd607e116c1f-kube-api-access-7nkrr\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.899067 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tqkms" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.903602 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.904543 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.906424 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tc8q5" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.923830 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.941175 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.952000 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.967300 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.985513 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.994377 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v"] Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.995177 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.996264 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2kmz\" (UniqueName: \"kubernetes.io/projected/e61f8f71-02fe-448d-a0ef-1d2290d558b1-kube-api-access-p2kmz\") pod \"keystone-operator-controller-manager-b4d948c87-58ml6\" (UID: \"e61f8f71-02fe-448d-a0ef-1d2290d558b1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.996303 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nkrr\" (UniqueName: \"kubernetes.io/projected/61b3e902-e458-49b8-8924-fd607e116c1f-kube-api-access-7nkrr\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.996337 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-mfzp9\" (UniqueName: \"kubernetes.io/projected/aa77cbbd-b043-472e-ba08-07c42e16d326-kube-api-access-mfzp9\") pod \"manila-operator-controller-manager-54f6768c69-9zksh\" (UID: \"aa77cbbd-b043-472e-ba08-07c42e16d326\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.996463 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.996495 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdrj\" (UniqueName: \"kubernetes.io/projected/8d91d728-e5b6-4f5e-81ad-158b96069d64-kube-api-access-2wdrj\") pod \"mariadb-operator-controller-manager-6994f66f48-m2bd5\" (UID: \"8d91d728-e5b6-4f5e-81ad-158b96069d64\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.996527 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29rwc\" (UniqueName: \"kubernetes.io/projected/b73d7256-9139-4cbd-b7a7-7b4b3852aafb-kube-api-access-29rwc\") pod \"ironic-operator-controller-manager-554564d7fc-wgs6h\" (UID: \"b73d7256-9139-4cbd-b7a7-7b4b3852aafb\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" Feb 19 08:15:38 crc kubenswrapper[5023]: I0219 08:15:38.996545 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxmvb\" (UniqueName: \"kubernetes.io/projected/f96cd850-d719-444c-8015-fdffb335df27-kube-api-access-vxmvb\") pod 
\"horizon-operator-controller-manager-5b9b8895d5-lfj5q\" (UID: \"f96cd850-d719-444c-8015-fdffb335df27\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" Feb 19 08:15:38 crc kubenswrapper[5023]: E0219 08:15:38.997057 5023 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:38 crc kubenswrapper[5023]: E0219 08:15:38.997113 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert podName:61b3e902-e458-49b8-8924-fd607e116c1f nodeName:}" failed. No retries permitted until 2026-02-19 08:15:39.497094065 +0000 UTC m=+897.154213013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert") pod "infra-operator-controller-manager-79d975b745-txbbh" (UID: "61b3e902-e458-49b8-8924-fd607e116c1f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.004951 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.019893 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2c9xt" Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.020555 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.036368 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"] Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.037579 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.044344 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxmvb\" (UniqueName: \"kubernetes.io/projected/f96cd850-d719-444c-8015-fdffb335df27-kube-api-access-vxmvb\") pod \"horizon-operator-controller-manager-5b9b8895d5-lfj5q\" (UID: \"f96cd850-d719-444c-8015-fdffb335df27\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.046847 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zhs25"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.076561 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2kmz\" (UniqueName: \"kubernetes.io/projected/e61f8f71-02fe-448d-a0ef-1d2290d558b1-kube-api-access-p2kmz\") pod \"keystone-operator-controller-manager-b4d948c87-58ml6\" (UID: \"e61f8f71-02fe-448d-a0ef-1d2290d558b1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.080768 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nkrr\" (UniqueName: \"kubernetes.io/projected/61b3e902-e458-49b8-8924-fd607e116c1f-kube-api-access-7nkrr\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.081884 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29rwc\" (UniqueName: \"kubernetes.io/projected/b73d7256-9139-4cbd-b7a7-7b4b3852aafb-kube-api-access-29rwc\") pod \"ironic-operator-controller-manager-554564d7fc-wgs6h\" (UID: \"b73d7256-9139-4cbd-b7a7-7b4b3852aafb\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.093726 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.099856 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdrj\" (UniqueName: \"kubernetes.io/projected/8d91d728-e5b6-4f5e-81ad-158b96069d64-kube-api-access-2wdrj\") pod \"mariadb-operator-controller-manager-6994f66f48-m2bd5\" (UID: \"8d91d728-e5b6-4f5e-81ad-158b96069d64\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.099929 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwch\" (UniqueName: \"kubernetes.io/projected/e9e36838-6d27-4e7e-9619-e3cd7b304426-kube-api-access-pfwch\") pod \"nova-operator-controller-manager-567668f5cf-zwc8v\" (UID: \"e9e36838-6d27-4e7e-9619-e3cd7b304426\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.099968 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79w4s\" (UniqueName: \"kubernetes.io/projected/17f2a3cb-6233-4f7f-b530-fb662f1aba34-kube-api-access-79w4s\") pod \"neutron-operator-controller-manager-64ddbf8bb-9rxg5\" (UID: \"17f2a3cb-6233-4f7f-b530-fb662f1aba34\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.100071 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfzp9\" (UniqueName: \"kubernetes.io/projected/aa77cbbd-b043-472e-ba08-07c42e16d326-kube-api-access-mfzp9\") pod \"manila-operator-controller-manager-54f6768c69-9zksh\" (UID: \"aa77cbbd-b043-472e-ba08-07c42e16d326\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.153256 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.152739 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfzp9\" (UniqueName: \"kubernetes.io/projected/aa77cbbd-b043-472e-ba08-07c42e16d326-kube-api-access-mfzp9\") pod \"manila-operator-controller-manager-54f6768c69-9zksh\" (UID: \"aa77cbbd-b043-472e-ba08-07c42e16d326\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.154732 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdrj\" (UniqueName: \"kubernetes.io/projected/8d91d728-e5b6-4f5e-81ad-158b96069d64-kube-api-access-2wdrj\") pod \"mariadb-operator-controller-manager-6994f66f48-m2bd5\" (UID: \"8d91d728-e5b6-4f5e-81ad-158b96069d64\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.183463 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.203721 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.205057 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.208744 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzxt7\" (UniqueName: \"kubernetes.io/projected/486c209b-21d4-45cb-9b95-cb8d27df2ad1-kube-api-access-kzxt7\") pod \"octavia-operator-controller-manager-69f8888797-kjbpp\" (UID: \"486c209b-21d4-45cb-9b95-cb8d27df2ad1\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.208829 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfwch\" (UniqueName: \"kubernetes.io/projected/e9e36838-6d27-4e7e-9619-e3cd7b304426-kube-api-access-pfwch\") pod \"nova-operator-controller-manager-567668f5cf-zwc8v\" (UID: \"e9e36838-6d27-4e7e-9619-e3cd7b304426\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.208872 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79w4s\" (UniqueName: \"kubernetes.io/projected/17f2a3cb-6233-4f7f-b530-fb662f1aba34-kube-api-access-79w4s\") pod \"neutron-operator-controller-manager-64ddbf8bb-9rxg5\" (UID: \"17f2a3cb-6233-4f7f-b530-fb662f1aba34\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.208902 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7l8hw"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.209073 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.220072 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.234043 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.240956 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.241501 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79w4s\" (UniqueName: \"kubernetes.io/projected/17f2a3cb-6233-4f7f-b530-fb662f1aba34-kube-api-access-79w4s\") pod \"neutron-operator-controller-manager-64ddbf8bb-9rxg5\" (UID: \"17f2a3cb-6233-4f7f-b530-fb662f1aba34\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.238608 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfwch\" (UniqueName: \"kubernetes.io/projected/e9e36838-6d27-4e7e-9619-e3cd7b304426-kube-api-access-pfwch\") pod \"nova-operator-controller-manager-567668f5cf-zwc8v\" (UID: \"e9e36838-6d27-4e7e-9619-e3cd7b304426\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.267805 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.276268 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.281238 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5jcww"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.310251 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzxt7\" (UniqueName: \"kubernetes.io/projected/486c209b-21d4-45cb-9b95-cb8d27df2ad1-kube-api-access-kzxt7\") pod \"octavia-operator-controller-manager-69f8888797-kjbpp\" (UID: \"486c209b-21d4-45cb-9b95-cb8d27df2ad1\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.310324 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcvlf\" (UniqueName: \"kubernetes.io/projected/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-kube-api-access-zcvlf\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.310401 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.327688 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.329210 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.334279 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.339274 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzxt7\" (UniqueName: \"kubernetes.io/projected/486c209b-21d4-45cb-9b95-cb8d27df2ad1-kube-api-access-kzxt7\") pod \"octavia-operator-controller-manager-69f8888797-kjbpp\" (UID: \"486c209b-21d4-45cb-9b95-cb8d27df2ad1\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.348052 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.349215 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.354264 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-898rr"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.363649 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.391830 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.391870 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.392957 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.393832 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.407113 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cllvl"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.416091 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.416495 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rlq4\" (UniqueName: \"kubernetes.io/projected/314f00ab-6012-4663-b265-2df54d81511b-kube-api-access-5rlq4\") pod \"ovn-operator-controller-manager-d44cf6b75-dfkgq\" (UID: \"314f00ab-6012-4663-b265-2df54d81511b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.416545 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcvlf\" (UniqueName: \"kubernetes.io/projected/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-kube-api-access-zcvlf\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.416978 5023 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.417018 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert podName:7fc6e4db-1bd8-42ff-a64e-c4f356f80806 nodeName:}" failed. No retries permitted until 2026-02-19 08:15:39.917004743 +0000 UTC m=+897.574123691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" (UID: "7fc6e4db-1bd8-42ff-a64e-c4f356f80806") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.426536 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.427014 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.447686 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.449148 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.458603 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcvlf\" (UniqueName: \"kubernetes.io/projected/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-kube-api-access-zcvlf\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.460051 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g46g5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.465996 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-shhzj"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.466986 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.471877 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4kzn5"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.475427 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.505771 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-shhzj"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.517699 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.517744 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rlq4\" (UniqueName: \"kubernetes.io/projected/314f00ab-6012-4663-b265-2df54d81511b-kube-api-access-5rlq4\") pod \"ovn-operator-controller-manager-d44cf6b75-dfkgq\" (UID: \"314f00ab-6012-4663-b265-2df54d81511b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.517800 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwgpj\" (UniqueName: \"kubernetes.io/projected/2d806bd1-886e-4643-a98e-856c74c803aa-kube-api-access-rwgpj\") pod \"swift-operator-controller-manager-68f46476f-9wcz4\" (UID: \"2d806bd1-886e-4643-a98e-856c74c803aa\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.517860 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pth8j\" (UniqueName: \"kubernetes.io/projected/d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d-kube-api-access-pth8j\") pod \"placement-operator-controller-manager-8497b45c89-jdlhp\" (UID: \"d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.522024 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"]
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.522858 5023 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.522958 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert podName:61b3e902-e458-49b8-8924-fd607e116c1f nodeName:}" failed. No retries permitted until 2026-02-19 08:15:40.522915405 +0000 UTC m=+898.180034353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert") pod "infra-operator-controller-manager-79d975b745-txbbh" (UID: "61b3e902-e458-49b8-8924-fd607e116c1f") : secret "infra-operator-webhook-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.523454 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.530615 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-m9xjx"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.538186 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.557783 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rlq4\" (UniqueName: \"kubernetes.io/projected/314f00ab-6012-4663-b265-2df54d81511b-kube-api-access-5rlq4\") pod \"ovn-operator-controller-manager-d44cf6b75-dfkgq\" (UID: \"314f00ab-6012-4663-b265-2df54d81511b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.579584 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.580578 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.585866 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.588168 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2fjqm"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.588374 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.588476 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.619368 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.619726 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqnkt\" (UniqueName: \"kubernetes.io/projected/b448df69-64f6-4ba5-9c1d-60d1ca582acb-kube-api-access-nqnkt\") pod \"telemetry-operator-controller-manager-7f45b4ff68-ks9rd\" (UID: \"b448df69-64f6-4ba5-9c1d-60d1ca582acb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.619826 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wh9\" (UniqueName: \"kubernetes.io/projected/3a0054e7-bed9-4f62-a6d9-c460a32deeef-kube-api-access-z6wh9\") pod \"watcher-operator-controller-manager-5b6f75fc4-mhwht\" (UID: \"3a0054e7-bed9-4f62-a6d9-c460a32deeef\") " pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.619875 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwgpj\" (UniqueName: \"kubernetes.io/projected/2d806bd1-886e-4643-a98e-856c74c803aa-kube-api-access-rwgpj\") pod \"swift-operator-controller-manager-68f46476f-9wcz4\" (UID: \"2d806bd1-886e-4643-a98e-856c74c803aa\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.619929 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjpp\" (UniqueName: \"kubernetes.io/projected/7b5a2508-a1ef-40f4-92c3-91aae50788ba-kube-api-access-zwjpp\") pod \"test-operator-controller-manager-7866795846-shhzj\" (UID: \"7b5a2508-a1ef-40f4-92c3-91aae50788ba\") " pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.619950 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pth8j\" (UniqueName: \"kubernetes.io/projected/d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d-kube-api-access-pth8j\") pod \"placement-operator-controller-manager-8497b45c89-jdlhp\" (UID: \"d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.640101 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.641395 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.644051 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwgpj\" (UniqueName: \"kubernetes.io/projected/2d806bd1-886e-4643-a98e-856c74c803aa-kube-api-access-rwgpj\") pod \"swift-operator-controller-manager-68f46476f-9wcz4\" (UID: \"2d806bd1-886e-4643-a98e-856c74c803aa\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.644532 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mr2vb"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.650222 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.667817 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pth8j\" (UniqueName: \"kubernetes.io/projected/d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d-kube-api-access-pth8j\") pod \"placement-operator-controller-manager-8497b45c89-jdlhp\" (UID: \"d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.676780 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.746039 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6wh9\" (UniqueName: \"kubernetes.io/projected/3a0054e7-bed9-4f62-a6d9-c460a32deeef-kube-api-access-z6wh9\") pod \"watcher-operator-controller-manager-5b6f75fc4-mhwht\" (UID: \"3a0054e7-bed9-4f62-a6d9-c460a32deeef\") " pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.746146 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkhr\" (UniqueName: \"kubernetes.io/projected/6e8405b6-2fae-404e-87c3-635d94cc4376-kube-api-access-ldkhr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nsz2f\" (UID: \"6e8405b6-2fae-404e-87c3-635d94cc4376\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.746248 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4l4m\" (UniqueName: \"kubernetes.io/projected/0c7247ae-fc2e-42b0-8333-33093c37978e-kube-api-access-q4l4m\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.746410 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjpp\" (UniqueName: \"kubernetes.io/projected/7b5a2508-a1ef-40f4-92c3-91aae50788ba-kube-api-access-zwjpp\") pod \"test-operator-controller-manager-7866795846-shhzj\" (UID: \"7b5a2508-a1ef-40f4-92c3-91aae50788ba\") " pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.746523 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.746571 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqnkt\" (UniqueName: \"kubernetes.io/projected/b448df69-64f6-4ba5-9c1d-60d1ca582acb-kube-api-access-nqnkt\") pod \"telemetry-operator-controller-manager-7f45b4ff68-ks9rd\" (UID: \"b448df69-64f6-4ba5-9c1d-60d1ca582acb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.746697 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.749799 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.779115 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqnkt\" (UniqueName: \"kubernetes.io/projected/b448df69-64f6-4ba5-9c1d-60d1ca582acb-kube-api-access-nqnkt\") pod \"telemetry-operator-controller-manager-7f45b4ff68-ks9rd\" (UID: \"b448df69-64f6-4ba5-9c1d-60d1ca582acb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.779158 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjpp\" (UniqueName: \"kubernetes.io/projected/7b5a2508-a1ef-40f4-92c3-91aae50788ba-kube-api-access-zwjpp\") pod \"test-operator-controller-manager-7866795846-shhzj\" (UID: \"7b5a2508-a1ef-40f4-92c3-91aae50788ba\") " pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.783448 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6wh9\" (UniqueName: \"kubernetes.io/projected/3a0054e7-bed9-4f62-a6d9-c460a32deeef-kube-api-access-z6wh9\") pod \"watcher-operator-controller-manager-5b6f75fc4-mhwht\" (UID: \"3a0054e7-bed9-4f62-a6d9-c460a32deeef\") " pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.828752 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.840322 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.869585 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.869730 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkhr\" (UniqueName: \"kubernetes.io/projected/6e8405b6-2fae-404e-87c3-635d94cc4376-kube-api-access-ldkhr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nsz2f\" (UID: \"6e8405b6-2fae-404e-87c3-635d94cc4376\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.869765 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4l4m\" (UniqueName: \"kubernetes.io/projected/0c7247ae-fc2e-42b0-8333-33093c37978e-kube-api-access-q4l4m\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.869885 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.870036 5023 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.870090 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:40.370071195 +0000 UTC m=+898.027190153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "metrics-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.870128 5023 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.870145 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:40.370139317 +0000 UTC m=+898.027258265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "webhook-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.870597 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.889736 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.897226 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.906892 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq"]
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.933754 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldkhr\" (UniqueName: \"kubernetes.io/projected/6e8405b6-2fae-404e-87c3-635d94cc4376-kube-api-access-ldkhr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nsz2f\" (UID: \"6e8405b6-2fae-404e-87c3-635d94cc4376\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.939564 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4l4m\" (UniqueName: \"kubernetes.io/projected/0c7247ae-fc2e-42b0-8333-33093c37978e-kube-api-access-q4l4m\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"
Feb 19 08:15:39 crc kubenswrapper[5023]: I0219 08:15:39.972392 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.972667 5023 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 08:15:39 crc kubenswrapper[5023]: E0219 08:15:39.972716 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert podName:7fc6e4db-1bd8-42ff-a64e-c4f356f80806 nodeName:}" failed. No retries permitted until 2026-02-19 08:15:40.97270054 +0000 UTC m=+898.629819488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" (UID: "7fc6e4db-1bd8-42ff-a64e-c4f356f80806") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.020967 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"
Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.079169 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.209356 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" event={"ID":"677afd79-73b0-45db-a513-6b77dfb09992","Type":"ContainerStarted","Data":"fbff9f07fd6e277d8e298ba745a130ebe9b08eb27c709cb4e0ef01ec2930122e"} Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.225715 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" event={"ID":"a396f869-bade-4ff1-9031-ac899d4d6ed2","Type":"ContainerStarted","Data":"ebcca554f841a3e450a94878c040b7883d4c57fd27b2b48b129044d0e35cc998"} Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.238493 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" event={"ID":"9719932b-2c04-47a0-97b8-492d4a5d297c","Type":"ContainerStarted","Data":"80f892fc7b11bf3080f025b4598156426234d083ed2951b4be82e47f703b15b4"} Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.244665 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.245282 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" event={"ID":"cdfff2ca-6dc1-4850-806d-7fb9195e276a","Type":"ContainerStarted","Data":"17478a0a2d8c5033da24a8a5aaaed2abb647a9f6b39c53cb7d54eeadbb42c1ee"} Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.257267 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.264695 5023 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t"] Feb 19 08:15:40 crc kubenswrapper[5023]: W0219 08:15:40.313100 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d91d728_e5b6_4f5e_81ad_158b96069d64.slice/crio-517ff636a899fd126cf2b691739ef93eef626c6c97b1afe58ee4f17b705d3bf2 WatchSource:0}: Error finding container 517ff636a899fd126cf2b691739ef93eef626c6c97b1afe58ee4f17b705d3bf2: Status 404 returned error can't find the container with id 517ff636a899fd126cf2b691739ef93eef626c6c97b1afe58ee4f17b705d3bf2 Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.372467 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.384263 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.384391 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.384525 5023 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.384604 5023 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:41.384581897 +0000 UTC m=+899.041700845 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "metrics-server-cert" not found Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.384660 5023 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.384745 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:41.384722 +0000 UTC m=+899.041841018 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "webhook-server-cert" not found Feb 19 08:15:40 crc kubenswrapper[5023]: W0219 08:15:40.453229 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb73d7256_9139_4cbd_b7a7_7b4b3852aafb.slice/crio-bdb97e8535e28804ff3bdfd03d69d22a3007df3d1073e3e0035c4d20335b9e9e WatchSource:0}: Error finding container bdb97e8535e28804ff3bdfd03d69d22a3007df3d1073e3e0035c4d20335b9e9e: Status 404 returned error can't find the container with id bdb97e8535e28804ff3bdfd03d69d22a3007df3d1073e3e0035c4d20335b9e9e Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.470232 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.494259 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.595795 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.596008 5023 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.596082 5023 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert podName:61b3e902-e458-49b8-8924-fd607e116c1f nodeName:}" failed. No retries permitted until 2026-02-19 08:15:42.596058461 +0000 UTC m=+900.253177399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert") pod "infra-operator-controller-manager-79d975b745-txbbh" (UID: "61b3e902-e458-49b8-8924-fd607e116c1f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.660452 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.676857 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.868165 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.879827 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v"] Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.889328 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4"] Feb 19 08:15:40 crc kubenswrapper[5023]: W0219 08:15:40.896611 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod314f00ab_6012_4663_b265_2df54d81511b.slice/crio-527963535c9a2bac1b48626fa0d238a7faa6a2e43f477beb2108252884bd2909 WatchSource:0}: Error finding container 527963535c9a2bac1b48626fa0d238a7faa6a2e43f477beb2108252884bd2909: Status 404 returned error can't find the container with id 
527963535c9a2bac1b48626fa0d238a7faa6a2e43f477beb2108252884bd2909 Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.896789 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-shhzj"] Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.950537 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfwch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-zwc8v_openstack-operators(e9e36838-6d27-4e7e-9619-e3cd7b304426): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:15:40 crc kubenswrapper[5023]: E0219 08:15:40.952641 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" podUID="e9e36838-6d27-4e7e-9619-e3cd7b304426" Feb 19 08:15:40 crc kubenswrapper[5023]: I0219 08:15:40.962903 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd"] Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.000356 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 
08:15:41.000552 5023 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.000603 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert podName:7fc6e4db-1bd8-42ff-a64e-c4f356f80806 nodeName:}" failed. No retries permitted until 2026-02-19 08:15:43.000587353 +0000 UTC m=+900.657706301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" (UID: "7fc6e4db-1bd8-42ff-a64e-c4f356f80806") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.017000 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp"] Feb 19 08:15:41 crc kubenswrapper[5023]: W0219 08:15:41.022586 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb448df69_64f6_4ba5_9c1d_60d1ca582acb.slice/crio-d1aa3a517aaa99445606d9c1028f927839d4eaa0cc9acf4dd265f1a924170d2f WatchSource:0}: Error finding container d1aa3a517aaa99445606d9c1028f927839d4eaa0cc9acf4dd265f1a924170d2f: Status 404 returned error can't find the container with id d1aa3a517aaa99445606d9c1028f927839d4eaa0cc9acf4dd265f1a924170d2f Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.041855 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f"] Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.053080 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"] Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.057250 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pth8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-jdlhp_openstack-operators(d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.058810 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" podUID="d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d" Feb 19 08:15:41 crc kubenswrapper[5023]: W0219 08:15:41.064677 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a0054e7_bed9_4f62_a6d9_c460a32deeef.slice/crio-74939aeb93a22c864088ff8b560baa999a4d3debb71eaf4bdddd295c55234a0a WatchSource:0}: Error finding container 74939aeb93a22c864088ff8b560baa999a4d3debb71eaf4bdddd295c55234a0a: Status 404 returned error can't find the container with id 74939aeb93a22c864088ff8b560baa999a4d3debb71eaf4bdddd295c55234a0a Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.068871 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.194:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6wh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5b6f75fc4-mhwht_openstack-operators(3a0054e7-bed9-4f62-a6d9-c460a32deeef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.070135 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.082205 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldkhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nsz2f_openstack-operators(6e8405b6-2fae-404e-87c3-635d94cc4376): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.083415 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" podUID="6e8405b6-2fae-404e-87c3-635d94cc4376" Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.277566 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" event={"ID":"8d91d728-e5b6-4f5e-81ad-158b96069d64","Type":"ContainerStarted","Data":"517ff636a899fd126cf2b691739ef93eef626c6c97b1afe58ee4f17b705d3bf2"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.279992 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" event={"ID":"17f2a3cb-6233-4f7f-b530-fb662f1aba34","Type":"ContainerStarted","Data":"5818aca8572ad18f01df2d5e381687f43172d8135dfb1061fd691988871981d7"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.281429 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" event={"ID":"b448df69-64f6-4ba5-9c1d-60d1ca582acb","Type":"ContainerStarted","Data":"d1aa3a517aaa99445606d9c1028f927839d4eaa0cc9acf4dd265f1a924170d2f"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.283289 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" event={"ID":"e9e36838-6d27-4e7e-9619-e3cd7b304426","Type":"ContainerStarted","Data":"90c00353e3f86e70e47a8b17b9e4dcac137bbc7e6dda3769c4f69e46e6758a97"} Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.285389 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" podUID="e9e36838-6d27-4e7e-9619-e3cd7b304426" Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.290194 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" event={"ID":"3a0054e7-bed9-4f62-a6d9-c460a32deeef","Type":"ContainerStarted","Data":"74939aeb93a22c864088ff8b560baa999a4d3debb71eaf4bdddd295c55234a0a"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.292266 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" event={"ID":"d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d","Type":"ContainerStarted","Data":"d54e6f1214c6023b1790390e3c4baa2c2cebe0603cada5240d949c2f35d94bcc"} Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.292380 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.194:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.293305 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" podUID="d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d" Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.296841 5023 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" event={"ID":"b73d7256-9139-4cbd-b7a7-7b4b3852aafb","Type":"ContainerStarted","Data":"bdb97e8535e28804ff3bdfd03d69d22a3007df3d1073e3e0035c4d20335b9e9e"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.305199 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" event={"ID":"05d6abf5-ddc2-460e-8b10-252292257fdd","Type":"ContainerStarted","Data":"27d9100d711801e8789bd8ba1dd84614784151172a7c0a8605bcd48205b21750"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.349730 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp" event={"ID":"486c209b-21d4-45cb-9b95-cb8d27df2ad1","Type":"ContainerStarted","Data":"5a8cead48e9751a5eed9605b05f4046aecdaae04d655e058b411532df1084b11"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.369643 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh" event={"ID":"aa77cbbd-b043-472e-ba08-07c42e16d326","Type":"ContainerStarted","Data":"bdfec3952633ed3256063addbca83ab2b5919611d914d7a6cc69b2ab043f2a28"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.370971 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq" event={"ID":"314f00ab-6012-4663-b265-2df54d81511b","Type":"ContainerStarted","Data":"527963535c9a2bac1b48626fa0d238a7faa6a2e43f477beb2108252884bd2909"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.383748 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4" event={"ID":"2d806bd1-886e-4643-a98e-856c74c803aa","Type":"ContainerStarted","Data":"05639f1f6371dfb268a0cb78d7e0e91699b028cca9fd249fa745eee002cd9cc5"} Feb 19 08:15:41 crc 
kubenswrapper[5023]: I0219 08:15:41.393675 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" event={"ID":"f96cd850-d719-444c-8015-fdffb335df27","Type":"ContainerStarted","Data":"5876d49de1037262e8af51178d2a56f7229716629b973f79505c1b505d8f99a9"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.396506 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj" event={"ID":"7b5a2508-a1ef-40f4-92c3-91aae50788ba","Type":"ContainerStarted","Data":"ff665c95f0ed869d4488b565d7b7a436b260a91e0c8a17a5c639050ac084a5e6"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.398042 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" event={"ID":"e61f8f71-02fe-448d-a0ef-1d2290d558b1","Type":"ContainerStarted","Data":"db8d2d5f418e30ba0ef18a26251c33c31aaf8b13f7b211fa106394baa1e4a96c"} Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.402789 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" event={"ID":"6e8405b6-2fae-404e-87c3-635d94cc4376","Type":"ContainerStarted","Data":"e8318883c15861eca8b64118901574d10d718af4ec90aa116255a140143fdee5"} Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.404352 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" podUID="6e8405b6-2fae-404e-87c3-635d94cc4376" Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.428747 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:41 crc kubenswrapper[5023]: I0219 08:15:41.428882 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.429351 5023 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.429466 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:43.429437886 +0000 UTC m=+901.086556834 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "webhook-server-cert" not found Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.429936 5023 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:15:41 crc kubenswrapper[5023]: E0219 08:15:41.429969 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:43.429957909 +0000 UTC m=+901.087076857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "metrics-server-cert" not found Feb 19 08:15:42 crc kubenswrapper[5023]: E0219 08:15:42.418720 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" podUID="6e8405b6-2fae-404e-87c3-635d94cc4376" Feb 19 08:15:42 crc kubenswrapper[5023]: E0219 08:15:42.419382 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.194:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" Feb 19 08:15:42 crc kubenswrapper[5023]: E0219 08:15:42.420357 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" podUID="e9e36838-6d27-4e7e-9619-e3cd7b304426" Feb 19 08:15:42 crc kubenswrapper[5023]: E0219 08:15:42.420401 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" podUID="d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d" Feb 19 08:15:42 crc kubenswrapper[5023]: I0219 08:15:42.662146 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:42 crc kubenswrapper[5023]: E0219 08:15:42.662387 5023 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:42 crc kubenswrapper[5023]: E0219 08:15:42.662440 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert podName:61b3e902-e458-49b8-8924-fd607e116c1f nodeName:}" failed. 
No retries permitted until 2026-02-19 08:15:46.662424945 +0000 UTC m=+904.319543893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert") pod "infra-operator-controller-manager-79d975b745-txbbh" (UID: "61b3e902-e458-49b8-8924-fd607e116c1f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:43 crc kubenswrapper[5023]: I0219 08:15:43.070966 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:15:43 crc kubenswrapper[5023]: E0219 08:15:43.071275 5023 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:15:43 crc kubenswrapper[5023]: E0219 08:15:43.071350 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert podName:7fc6e4db-1bd8-42ff-a64e-c4f356f80806 nodeName:}" failed. No retries permitted until 2026-02-19 08:15:47.071331032 +0000 UTC m=+904.728449970 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" (UID: "7fc6e4db-1bd8-42ff-a64e-c4f356f80806") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:15:43 crc kubenswrapper[5023]: I0219 08:15:43.493728 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:43 crc kubenswrapper[5023]: I0219 08:15:43.498612 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:43 crc kubenswrapper[5023]: E0219 08:15:43.494014 5023 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:15:43 crc kubenswrapper[5023]: E0219 08:15:43.514757 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:47.514720879 +0000 UTC m=+905.171839817 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "metrics-server-cert" not found Feb 19 08:15:43 crc kubenswrapper[5023]: E0219 08:15:43.498862 5023 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:15:43 crc kubenswrapper[5023]: E0219 08:15:43.514907 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:47.514877543 +0000 UTC m=+905.171996491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "webhook-server-cert" not found Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.021068 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6m7ch"] Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.023015 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.036241 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9stlh"] Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.038172 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.096196 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6m7ch"] Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.124938 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9stlh"] Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.182227 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4k2p\" (UniqueName: \"kubernetes.io/projected/c401de64-b8de-4d9d-b291-84a0806fe6bc-kube-api-access-j4k2p\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.182277 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh6b\" (UniqueName: \"kubernetes.io/projected/817b2931-4fce-4f48-b2f5-cf2daed7e421-kube-api-access-xlh6b\") pod \"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.182340 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-utilities\") pod \"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.182383 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-utilities\") pod \"certified-operators-6m7ch\" (UID: 
\"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.182451 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-catalog-content\") pod \"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.182581 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-catalog-content\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.283639 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4k2p\" (UniqueName: \"kubernetes.io/projected/c401de64-b8de-4d9d-b291-84a0806fe6bc-kube-api-access-j4k2p\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.283676 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlh6b\" (UniqueName: \"kubernetes.io/projected/817b2931-4fce-4f48-b2f5-cf2daed7e421-kube-api-access-xlh6b\") pod \"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.283704 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-utilities\") pod 
\"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.283729 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-utilities\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.283760 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-catalog-content\") pod \"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.283828 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-catalog-content\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.284455 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-catalog-content\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.285528 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-utilities\") pod \"community-operators-9stlh\" (UID: 
\"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.285828 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-utilities\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.286137 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-catalog-content\") pod \"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.321002 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4k2p\" (UniqueName: \"kubernetes.io/projected/c401de64-b8de-4d9d-b291-84a0806fe6bc-kube-api-access-j4k2p\") pod \"certified-operators-6m7ch\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.344406 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlh6b\" (UniqueName: \"kubernetes.io/projected/817b2931-4fce-4f48-b2f5-cf2daed7e421-kube-api-access-xlh6b\") pod \"community-operators-9stlh\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.353537 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:15:45 crc kubenswrapper[5023]: I0219 08:15:45.379512 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:15:46 crc kubenswrapper[5023]: I0219 08:15:46.745449 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:46 crc kubenswrapper[5023]: E0219 08:15:46.745773 5023 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:46 crc kubenswrapper[5023]: E0219 08:15:46.746179 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert podName:61b3e902-e458-49b8-8924-fd607e116c1f nodeName:}" failed. No retries permitted until 2026-02-19 08:15:54.746153232 +0000 UTC m=+912.403272180 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert") pod "infra-operator-controller-manager-79d975b745-txbbh" (UID: "61b3e902-e458-49b8-8924-fd607e116c1f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:47 crc kubenswrapper[5023]: I0219 08:15:47.157419 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:15:47 crc kubenswrapper[5023]: E0219 08:15:47.157670 5023 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:15:47 crc kubenswrapper[5023]: E0219 08:15:47.157775 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert podName:7fc6e4db-1bd8-42ff-a64e-c4f356f80806 nodeName:}" failed. No retries permitted until 2026-02-19 08:15:55.15774995 +0000 UTC m=+912.814868888 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" (UID: "7fc6e4db-1bd8-42ff-a64e-c4f356f80806") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 08:15:47 crc kubenswrapper[5023]: I0219 08:15:47.563752 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:47 crc kubenswrapper[5023]: I0219 08:15:47.563941 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:47 crc kubenswrapper[5023]: E0219 08:15:47.564039 5023 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:15:47 crc kubenswrapper[5023]: E0219 08:15:47.564091 5023 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:15:47 crc kubenswrapper[5023]: E0219 08:15:47.564158 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:55.564139632 +0000 UTC m=+913.221258600 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "metrics-server-cert" not found Feb 19 08:15:47 crc kubenswrapper[5023]: E0219 08:15:47.564212 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:15:55.564169383 +0000 UTC m=+913.221288371 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "webhook-server-cert" not found Feb 19 08:15:53 crc kubenswrapper[5023]: I0219 08:15:53.485475 5023 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:15:54 crc kubenswrapper[5023]: I0219 08:15:54.792792 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:15:54 crc kubenswrapper[5023]: E0219 08:15:54.792949 5023 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:54 crc kubenswrapper[5023]: E0219 08:15:54.793006 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert podName:61b3e902-e458-49b8-8924-fd607e116c1f 
nodeName:}" failed. No retries permitted until 2026-02-19 08:16:10.792990405 +0000 UTC m=+928.450109353 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert") pod "infra-operator-controller-manager-79d975b745-txbbh" (UID: "61b3e902-e458-49b8-8924-fd607e116c1f") : secret "infra-operator-webhook-server-cert" not found Feb 19 08:15:55 crc kubenswrapper[5023]: I0219 08:15:55.205695 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:15:55 crc kubenswrapper[5023]: I0219 08:15:55.218960 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fc6e4db-1bd8-42ff-a64e-c4f356f80806-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9\" (UID: \"7fc6e4db-1bd8-42ff-a64e-c4f356f80806\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:15:55 crc kubenswrapper[5023]: I0219 08:15:55.497262 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7l8hw" Feb 19 08:15:55 crc kubenswrapper[5023]: I0219 08:15:55.504694 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:15:55 crc kubenswrapper[5023]: I0219 08:15:55.609404 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:55 crc kubenswrapper[5023]: I0219 08:15:55.609469 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:15:55 crc kubenswrapper[5023]: E0219 08:15:55.609559 5023 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 08:15:55 crc kubenswrapper[5023]: E0219 08:15:55.609596 5023 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 08:15:55 crc kubenswrapper[5023]: E0219 08:15:55.609642 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:16:11.609611438 +0000 UTC m=+929.266730376 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "metrics-server-cert" not found Feb 19 08:15:55 crc kubenswrapper[5023]: E0219 08:15:55.609676 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs podName:0c7247ae-fc2e-42b0-8333-33093c37978e nodeName:}" failed. No retries permitted until 2026-02-19 08:16:11.609651599 +0000 UTC m=+929.266770547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs") pod "openstack-operator-controller-manager-c8dc87cd9-xrk5c" (UID: "0c7247ae-fc2e-42b0-8333-33093c37978e") : secret "webhook-server-cert" not found Feb 19 08:15:56 crc kubenswrapper[5023]: E0219 08:15:56.472934 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 19 08:15:56 crc kubenswrapper[5023]: E0219 08:15:56.473267 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8b7vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-5xq6x_openstack-operators(677afd79-73b0-45db-a513-6b77dfb09992): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:15:56 crc kubenswrapper[5023]: E0219 08:15:56.475391 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" podUID="677afd79-73b0-45db-a513-6b77dfb09992" Feb 19 08:15:57 crc kubenswrapper[5023]: E0219 08:15:57.055230 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" podUID="677afd79-73b0-45db-a513-6b77dfb09992" Feb 19 08:15:57 crc kubenswrapper[5023]: E0219 08:15:57.501021 5023 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 19 08:15:57 crc kubenswrapper[5023]: E0219 08:15:57.501203 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzxt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-kjbpp_openstack-operators(486c209b-21d4-45cb-9b95-cb8d27df2ad1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:15:57 crc kubenswrapper[5023]: E0219 08:15:57.502497 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp" podUID="486c209b-21d4-45cb-9b95-cb8d27df2ad1" Feb 19 08:15:58 crc kubenswrapper[5023]: E0219 08:15:58.084051 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp" podUID="486c209b-21d4-45cb-9b95-cb8d27df2ad1" Feb 19 08:15:58 crc kubenswrapper[5023]: E0219 08:15:58.898818 5023 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" Feb 19 08:15:58 crc kubenswrapper[5023]: E0219 08:15:58.899074 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqnkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-ks9rd_openstack-operators(b448df69-64f6-4ba5-9c1d-60d1ca582acb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:15:58 crc kubenswrapper[5023]: E0219 08:15:58.900332 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" podUID="b448df69-64f6-4ba5-9c1d-60d1ca582acb" Feb 19 08:15:59 crc kubenswrapper[5023]: E0219 08:15:59.098078 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" podUID="b448df69-64f6-4ba5-9c1d-60d1ca582acb" Feb 19 08:15:59 crc kubenswrapper[5023]: E0219 08:15:59.743775 5023 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 19 08:15:59 crc kubenswrapper[5023]: E0219 08:15:59.744399 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-79w4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-9rxg5_openstack-operators(17f2a3cb-6233-4f7f-b530-fb662f1aba34): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:15:59 crc kubenswrapper[5023]: E0219 08:15:59.746085 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" podUID="17f2a3cb-6233-4f7f-b530-fb662f1aba34" Feb 19 08:16:00 crc kubenswrapper[5023]: E0219 08:16:00.107727 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" podUID="17f2a3cb-6233-4f7f-b530-fb662f1aba34" Feb 19 08:16:00 crc kubenswrapper[5023]: E0219 08:16:00.421340 5023 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 19 08:16:00 crc kubenswrapper[5023]: E0219 08:16:00.421590 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29rwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-wgs6h_openstack-operators(b73d7256-9139-4cbd-b7a7-7b4b3852aafb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:16:00 crc kubenswrapper[5023]: E0219 08:16:00.422834 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" podUID="b73d7256-9139-4cbd-b7a7-7b4b3852aafb" Feb 19 08:16:01 crc kubenswrapper[5023]: E0219 08:16:01.121444 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" podUID="b73d7256-9139-4cbd-b7a7-7b4b3852aafb" Feb 19 08:16:03 crc kubenswrapper[5023]: E0219 08:16:03.485603 5023 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 19 08:16:03 crc kubenswrapper[5023]: E0219 08:16:03.486195 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wdrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-m2bd5_openstack-operators(8d91d728-e5b6-4f5e-81ad-158b96069d64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:16:03 crc kubenswrapper[5023]: E0219 08:16:03.488136 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" podUID="8d91d728-e5b6-4f5e-81ad-158b96069d64" Feb 19 08:16:04 crc kubenswrapper[5023]: E0219 08:16:04.145799 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" podUID="8d91d728-e5b6-4f5e-81ad-158b96069d64" Feb 19 08:16:05 crc kubenswrapper[5023]: E0219 08:16:05.551889 5023 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 08:16:05 crc kubenswrapper[5023]: E0219 08:16:05.552470 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2kmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-58ml6_openstack-operators(e61f8f71-02fe-448d-a0ef-1d2290d558b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:16:05 crc kubenswrapper[5023]: E0219 08:16:05.553892 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" podUID="e61f8f71-02fe-448d-a0ef-1d2290d558b1" Feb 19 08:16:06 crc kubenswrapper[5023]: E0219 08:16:06.157115 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" podUID="e61f8f71-02fe-448d-a0ef-1d2290d558b1" Feb 19 08:16:07 crc kubenswrapper[5023]: I0219 08:16:07.059148 5023 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-6m7ch"] Feb 19 08:16:10 crc kubenswrapper[5023]: I0219 08:16:10.795747 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:16:10 crc kubenswrapper[5023]: I0219 08:16:10.806583 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61b3e902-e458-49b8-8924-fd607e116c1f-cert\") pod \"infra-operator-controller-manager-79d975b745-txbbh\" (UID: \"61b3e902-e458-49b8-8924-fd607e116c1f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:16:10 crc kubenswrapper[5023]: I0219 08:16:10.858997 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2rjrq" Feb 19 08:16:10 crc kubenswrapper[5023]: I0219 08:16:10.867775 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:16:11 crc kubenswrapper[5023]: I0219 08:16:11.710409 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:16:11 crc kubenswrapper[5023]: I0219 08:16:11.710827 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:16:11 crc kubenswrapper[5023]: I0219 08:16:11.728431 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-webhook-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:16:11 crc kubenswrapper[5023]: I0219 08:16:11.730272 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c7247ae-fc2e-42b0-8333-33093c37978e-metrics-certs\") pod \"openstack-operator-controller-manager-c8dc87cd9-xrk5c\" (UID: \"0c7247ae-fc2e-42b0-8333-33093c37978e\") " pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:16:11 crc kubenswrapper[5023]: I0219 08:16:11.748685 5023 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2fjqm" Feb 19 08:16:11 crc kubenswrapper[5023]: I0219 08:16:11.754607 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:16:14 crc kubenswrapper[5023]: E0219 08:16:14.232584 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 08:16:14 crc kubenswrapper[5023]: E0219 08:16:14.234233 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfwch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-zwc8v_openstack-operators(e9e36838-6d27-4e7e-9619-e3cd7b304426): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:16:14 crc kubenswrapper[5023]: E0219 08:16:14.237500 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" podUID="e9e36838-6d27-4e7e-9619-e3cd7b304426" Feb 19 08:16:15 crc kubenswrapper[5023]: E0219 08:16:15.334952 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.194:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114" Feb 19 08:16:15 crc kubenswrapper[5023]: E0219 08:16:15.335518 5023 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.194:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114" Feb 19 08:16:15 crc kubenswrapper[5023]: E0219 08:16:15.335723 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.194:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6wh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5b6f75fc4-mhwht_openstack-operators(3a0054e7-bed9-4f62-a6d9-c460a32deeef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:16:15 crc kubenswrapper[5023]: E0219 08:16:15.336954 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" Feb 19 08:16:15 crc kubenswrapper[5023]: E0219 08:16:15.850150 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 19 08:16:15 crc kubenswrapper[5023]: E0219 08:16:15.850416 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldkhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nsz2f_openstack-operators(6e8405b6-2fae-404e-87c3-635d94cc4376): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:16:15 crc kubenswrapper[5023]: E0219 08:16:15.854705 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" podUID="6e8405b6-2fae-404e-87c3-635d94cc4376" Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.216903 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c"] Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.264698 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m7ch" event={"ID":"c401de64-b8de-4d9d-b291-84a0806fe6bc","Type":"ContainerStarted","Data":"4891a5d5f24de1fbb1a2884cd52b3440e8e8495c7f0b48725dd057c0107e5e21"} Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.289534 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" event={"ID":"9719932b-2c04-47a0-97b8-492d4a5d297c","Type":"ContainerStarted","Data":"536be5f78ff23ec0dd9a1a1ce2e3b13614168e3ccc6346f83c80cfcdcfcdf82e"} Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.289682 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.323578 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" podStartSLOduration=12.651091444 podStartE2EDuration="38.323559822s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:39.838885903 +0000 UTC m=+897.496004851" 
lastFinishedPulling="2026-02-19 08:16:05.511354281 +0000 UTC m=+923.168473229" observedRunningTime="2026-02-19 08:16:16.322182455 +0000 UTC m=+933.979301413" watchObservedRunningTime="2026-02-19 08:16:16.323559822 +0000 UTC m=+933.980678780" Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.427499 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9stlh"] Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.444772 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9"] Feb 19 08:16:16 crc kubenswrapper[5023]: I0219 08:16:16.478873 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-txbbh"] Feb 19 08:16:16 crc kubenswrapper[5023]: E0219 08:16:16.502958 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc401de64_b8de_4d9d_b291_84a0806fe6bc.slice/crio-conmon-d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7.scope\": RecentStats: unable to find data in memory cache]" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.310633 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" event={"ID":"f96cd850-d719-444c-8015-fdffb335df27","Type":"ContainerStarted","Data":"505ad9e8e3ab08e1633f8c19537a7d1aea50a88f3c4949c3a36b113d425550d7"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.310994 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.314833 5023 generic.go:334] "Generic (PLEG): container finished" podID="817b2931-4fce-4f48-b2f5-cf2daed7e421" 
containerID="e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1" exitCode=0 Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.315095 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9stlh" event={"ID":"817b2931-4fce-4f48-b2f5-cf2daed7e421","Type":"ContainerDied","Data":"e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.315125 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9stlh" event={"ID":"817b2931-4fce-4f48-b2f5-cf2daed7e421","Type":"ContainerStarted","Data":"70ec541358dca0b85ad0d28f0a4e25d745a0aaba6f740e8c14aa1c7b17641271"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.338867 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" podStartSLOduration=14.337088092 podStartE2EDuration="39.338832201s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.509759796 +0000 UTC m=+898.166878744" lastFinishedPulling="2026-02-19 08:16:05.511503905 +0000 UTC m=+923.168622853" observedRunningTime="2026-02-19 08:16:17.337745762 +0000 UTC m=+934.994864710" watchObservedRunningTime="2026-02-19 08:16:17.338832201 +0000 UTC m=+934.995951149" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.388826 5023 generic.go:334] "Generic (PLEG): container finished" podID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerID="d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7" exitCode=0 Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.388888 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m7ch" event={"ID":"c401de64-b8de-4d9d-b291-84a0806fe6bc","Type":"ContainerDied","Data":"d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7"} Feb 19 08:16:17 crc 
kubenswrapper[5023]: I0219 08:16:17.441836 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh" event={"ID":"aa77cbbd-b043-472e-ba08-07c42e16d326","Type":"ContainerStarted","Data":"a2f84952f72f4f66e06d4a026f3b01c8086f2318bebdea7efb6b948a375018a6"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.465209 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.520343 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4" event={"ID":"2d806bd1-886e-4643-a98e-856c74c803aa","Type":"ContainerStarted","Data":"8eab462ff37b94acd61571afff50eb293da8d1c76594f2e944d74eacfd4714a2"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.520631 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.566846 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq" event={"ID":"314f00ab-6012-4663-b265-2df54d81511b","Type":"ContainerStarted","Data":"f9467ccc28f7ba94db0d04580b1f211ba6966bd9413255bd14414ba3ca79361e"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.566906 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.587919 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4" podStartSLOduration=13.968906985 podStartE2EDuration="38.587899456s" podCreationTimestamp="2026-02-19 08:15:39 +0000 UTC" 
firstStartedPulling="2026-02-19 08:15:40.892576216 +0000 UTC m=+898.549695164" lastFinishedPulling="2026-02-19 08:16:05.511568677 +0000 UTC m=+923.168687635" observedRunningTime="2026-02-19 08:16:17.582552595 +0000 UTC m=+935.239671543" watchObservedRunningTime="2026-02-19 08:16:17.587899456 +0000 UTC m=+935.245018404" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.599879 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh" podStartSLOduration=13.503876702 podStartE2EDuration="39.599855191s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.525717917 +0000 UTC m=+898.182836865" lastFinishedPulling="2026-02-19 08:16:06.621696406 +0000 UTC m=+924.278815354" observedRunningTime="2026-02-19 08:16:17.535861094 +0000 UTC m=+935.192980062" watchObservedRunningTime="2026-02-19 08:16:17.599855191 +0000 UTC m=+935.256974139" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.604310 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" event={"ID":"a396f869-bade-4ff1-9031-ac899d4d6ed2","Type":"ContainerStarted","Data":"75a8fea5d947b5ae74b274717c5d8b46c483becfc7feaec5e4a51c0a9cddcc3c"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.605259 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.632875 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" event={"ID":"61b3e902-e458-49b8-8924-fd607e116c1f","Type":"ContainerStarted","Data":"77bb48cb81f94aa2c7c5b7936ac5861a2ddfb20d9f39338b1aa450d0715018d4"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.666636 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" event={"ID":"b448df69-64f6-4ba5-9c1d-60d1ca582acb","Type":"ContainerStarted","Data":"0f6613ef63c23b5fd3a1ea3216c2b4dfedf17ccbf9904bd7d7f371653efb46ac"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.667461 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.706683 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq" podStartSLOduration=15.104484348 podStartE2EDuration="39.706667486s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.909355598 +0000 UTC m=+898.566474546" lastFinishedPulling="2026-02-19 08:16:05.511538736 +0000 UTC m=+923.168657684" observedRunningTime="2026-02-19 08:16:17.615143604 +0000 UTC m=+935.272262552" watchObservedRunningTime="2026-02-19 08:16:17.706667486 +0000 UTC m=+935.363786434" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.708364 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" event={"ID":"d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d","Type":"ContainerStarted","Data":"2c8036f3ed98ddf6ddc67d0592c2fa350cc2963f141b9b865d72a94b48908f1f"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.708771 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.710813 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" 
event={"ID":"b73d7256-9139-4cbd-b7a7-7b4b3852aafb","Type":"ContainerStarted","Data":"8b81cb76a82db2e54f7473a24e11fdc5100eba92fe01a79e247f2fd769b54f80"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.711698 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.712523 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" podStartSLOduration=14.100702681 podStartE2EDuration="39.71249873s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:39.900141848 +0000 UTC m=+897.557260796" lastFinishedPulling="2026-02-19 08:16:05.511937897 +0000 UTC m=+923.169056845" observedRunningTime="2026-02-19 08:16:17.705418383 +0000 UTC m=+935.362537331" watchObservedRunningTime="2026-02-19 08:16:17.71249873 +0000 UTC m=+935.369617678" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.713197 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" event={"ID":"677afd79-73b0-45db-a513-6b77dfb09992","Type":"ContainerStarted","Data":"8530c75b5946727ac380c35b349e31de004584509063770395d52f4aa18500d5"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.713506 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.740630 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" event={"ID":"05d6abf5-ddc2-460e-8b10-252292257fdd","Type":"ContainerStarted","Data":"a32583ec19d65f50d46d52edd8b6405a2f4b316ad0d692f7372ac38e23a2a73f"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.741337 5023 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.754351 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" podStartSLOduration=3.616807971 podStartE2EDuration="38.754324492s" podCreationTimestamp="2026-02-19 08:15:39 +0000 UTC" firstStartedPulling="2026-02-19 08:15:41.046723369 +0000 UTC m=+898.703842317" lastFinishedPulling="2026-02-19 08:16:16.18423989 +0000 UTC m=+933.841358838" observedRunningTime="2026-02-19 08:16:17.740880208 +0000 UTC m=+935.397999166" watchObservedRunningTime="2026-02-19 08:16:17.754324492 +0000 UTC m=+935.411443440" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.757199 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" event={"ID":"7fc6e4db-1bd8-42ff-a64e-c4f356f80806","Type":"ContainerStarted","Data":"3cfea2639d3a16a8d6d29444ec89dba2d4ef4b8450fecc4ff5f3caffd72c7b17"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.776590 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" event={"ID":"0c7247ae-fc2e-42b0-8333-33093c37978e","Type":"ContainerStarted","Data":"e1cbe3d47d185b0f83b4e38dd97cc975a60359580bbb50ae310b0df6477d43d4"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.776657 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" event={"ID":"0c7247ae-fc2e-42b0-8333-33093c37978e","Type":"ContainerStarted","Data":"b08cd71d6bcbd9ea6c3d96999f067971b13ee6fbc30e50ec04a935f1936f1cef"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.777323 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.803054 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" event={"ID":"cdfff2ca-6dc1-4850-806d-7fb9195e276a","Type":"ContainerStarted","Data":"e262c19ede5d6f79acf66ba220ac94199fe3abe2f5d030a52ff66afe15eb3a98"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.803792 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.826848 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" event={"ID":"17f2a3cb-6233-4f7f-b530-fb662f1aba34","Type":"ContainerStarted","Data":"137efa072726598b7bc225da307919c4a5bfcdc2e17936d1d5d6eafecda177ae"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.827534 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.865919 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj" event={"ID":"7b5a2508-a1ef-40f4-92c3-91aae50788ba","Type":"ContainerStarted","Data":"264e0862bc21b87d04f031e960f655dd57e9e28d53f63e2476fc5501a65c083b"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.866239 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.875992 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" 
podStartSLOduration=14.667409269 podStartE2EDuration="39.875966919s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.302830892 +0000 UTC m=+897.959949840" lastFinishedPulling="2026-02-19 08:16:05.511388542 +0000 UTC m=+923.168507490" observedRunningTime="2026-02-19 08:16:17.855645993 +0000 UTC m=+935.512764931" watchObservedRunningTime="2026-02-19 08:16:17.875966919 +0000 UTC m=+935.533085867" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.884905 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" podStartSLOduration=5.05987501 podStartE2EDuration="39.884870923s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:41.057035531 +0000 UTC m=+898.714154479" lastFinishedPulling="2026-02-19 08:16:15.882031444 +0000 UTC m=+933.539150392" observedRunningTime="2026-02-19 08:16:17.771256518 +0000 UTC m=+935.428375466" watchObservedRunningTime="2026-02-19 08:16:17.884870923 +0000 UTC m=+935.541989871" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.885164 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp" event={"ID":"486c209b-21d4-45cb-9b95-cb8d27df2ad1","Type":"ContainerStarted","Data":"d7de4bfa63123f2da94ab2f0e7424d12a274d6381d3c26aec921ca69e584d17e"} Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.885553 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp" Feb 19 08:16:17 crc kubenswrapper[5023]: I0219 08:16:17.938468 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" podStartSLOduration=3.5931672 podStartE2EDuration="39.938449985s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" 
firstStartedPulling="2026-02-19 08:15:39.839711225 +0000 UTC m=+897.496830173" lastFinishedPulling="2026-02-19 08:16:16.18499401 +0000 UTC m=+933.842112958" observedRunningTime="2026-02-19 08:16:17.935459526 +0000 UTC m=+935.592578474" watchObservedRunningTime="2026-02-19 08:16:17.938449985 +0000 UTC m=+935.595568933" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.032446 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" podStartSLOduration=4.313604529 podStartE2EDuration="40.032415622s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.464707218 +0000 UTC m=+898.121826166" lastFinishedPulling="2026-02-19 08:16:16.183518311 +0000 UTC m=+933.840637259" observedRunningTime="2026-02-19 08:16:17.986003719 +0000 UTC m=+935.643122667" watchObservedRunningTime="2026-02-19 08:16:18.032415622 +0000 UTC m=+935.689534570" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.052322 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" podStartSLOduration=16.372858879 podStartE2EDuration="40.052291716s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:39.839407177 +0000 UTC m=+897.496526125" lastFinishedPulling="2026-02-19 08:16:03.518840014 +0000 UTC m=+921.175958962" observedRunningTime="2026-02-19 08:16:18.03877675 +0000 UTC m=+935.695895698" watchObservedRunningTime="2026-02-19 08:16:18.052291716 +0000 UTC m=+935.709410664" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.086743 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj" podStartSLOduration=14.477934331 podStartE2EDuration="39.086727384s" podCreationTimestamp="2026-02-19 08:15:39 +0000 UTC" 
firstStartedPulling="2026-02-19 08:15:40.900360741 +0000 UTC m=+898.557479689" lastFinishedPulling="2026-02-19 08:16:05.509153793 +0000 UTC m=+923.166272742" observedRunningTime="2026-02-19 08:16:18.081044974 +0000 UTC m=+935.738163922" watchObservedRunningTime="2026-02-19 08:16:18.086727384 +0000 UTC m=+935.743846332" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.111562 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" podStartSLOduration=4.72862238 podStartE2EDuration="40.111539658s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.693795697 +0000 UTC m=+898.350914645" lastFinishedPulling="2026-02-19 08:16:16.076712975 +0000 UTC m=+933.733831923" observedRunningTime="2026-02-19 08:16:18.110916791 +0000 UTC m=+935.768035739" watchObservedRunningTime="2026-02-19 08:16:18.111539658 +0000 UTC m=+935.768658606" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.177488 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" podStartSLOduration=39.177470175 podStartE2EDuration="39.177470175s" podCreationTimestamp="2026-02-19 08:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:16:18.173638544 +0000 UTC m=+935.830757492" watchObservedRunningTime="2026-02-19 08:16:18.177470175 +0000 UTC m=+935.834589123" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.224130 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp" podStartSLOduration=4.711642292 podStartE2EDuration="40.224111665s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.669847576 +0000 UTC m=+898.326966524" 
lastFinishedPulling="2026-02-19 08:16:16.182316949 +0000 UTC m=+933.839435897" observedRunningTime="2026-02-19 08:16:18.215808396 +0000 UTC m=+935.872927344" watchObservedRunningTime="2026-02-19 08:16:18.224111665 +0000 UTC m=+935.881230613" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.927132 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m7ch" event={"ID":"c401de64-b8de-4d9d-b291-84a0806fe6bc","Type":"ContainerStarted","Data":"caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14"} Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.937951 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" event={"ID":"8d91d728-e5b6-4f5e-81ad-158b96069d64","Type":"ContainerStarted","Data":"15a1dd77a528b139ce7dc13d6f524def641c7f4148b6c79bd168b69f8381d7fe"} Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.938147 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" Feb 19 08:16:18 crc kubenswrapper[5023]: I0219 08:16:18.946371 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9stlh" event={"ID":"817b2931-4fce-4f48-b2f5-cf2daed7e421","Type":"ContainerStarted","Data":"c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf"} Feb 19 08:16:19 crc kubenswrapper[5023]: I0219 08:16:19.038826 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" podStartSLOduration=4.228146638 podStartE2EDuration="41.038805858s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.319391408 +0000 UTC m=+897.976510356" lastFinishedPulling="2026-02-19 08:16:17.130050628 +0000 UTC m=+934.787169576" observedRunningTime="2026-02-19 08:16:19.03582816 +0000 
UTC m=+936.692947108" watchObservedRunningTime="2026-02-19 08:16:19.038805858 +0000 UTC m=+936.695924806" Feb 19 08:16:19 crc kubenswrapper[5023]: I0219 08:16:19.981994 5023 generic.go:334] "Generic (PLEG): container finished" podID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerID="caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14" exitCode=0 Feb 19 08:16:19 crc kubenswrapper[5023]: I0219 08:16:19.982132 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m7ch" event={"ID":"c401de64-b8de-4d9d-b291-84a0806fe6bc","Type":"ContainerDied","Data":"caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14"} Feb 19 08:16:19 crc kubenswrapper[5023]: I0219 08:16:19.985218 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" event={"ID":"e61f8f71-02fe-448d-a0ef-1d2290d558b1","Type":"ContainerStarted","Data":"12d9c756dac89a08a6982788bbf0e5d439f4366545337a08a5975ac4d1c068d1"} Feb 19 08:16:20 crc kubenswrapper[5023]: I0219 08:16:20.030373 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" podStartSLOduration=3.337693099 podStartE2EDuration="42.030351583s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.318488775 +0000 UTC m=+897.975607723" lastFinishedPulling="2026-02-19 08:16:19.011147259 +0000 UTC m=+936.668266207" observedRunningTime="2026-02-19 08:16:20.02909954 +0000 UTC m=+937.686218488" watchObservedRunningTime="2026-02-19 08:16:20.030351583 +0000 UTC m=+937.687470541" Feb 19 08:16:20 crc kubenswrapper[5023]: I0219 08:16:20.993923 5023 generic.go:334] "Generic (PLEG): container finished" podID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerID="c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf" exitCode=0 Feb 19 08:16:20 crc kubenswrapper[5023]: I0219 
08:16:20.993971 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9stlh" event={"ID":"817b2931-4fce-4f48-b2f5-cf2daed7e421","Type":"ContainerDied","Data":"c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf"} Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.010110 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9stlh" event={"ID":"817b2931-4fce-4f48-b2f5-cf2daed7e421","Type":"ContainerStarted","Data":"3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae"} Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.012409 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m7ch" event={"ID":"c401de64-b8de-4d9d-b291-84a0806fe6bc","Type":"ContainerStarted","Data":"a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91"} Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.013989 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" event={"ID":"61b3e902-e458-49b8-8924-fd607e116c1f","Type":"ContainerStarted","Data":"996300be1ea284074dc910b171089aa46ba10bea31cfcecbfa63712ddc7f6965"} Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.014063 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.015158 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" event={"ID":"7fc6e4db-1bd8-42ff-a64e-c4f356f80806","Type":"ContainerStarted","Data":"6e3effad38d4994cc2707b2749cdb8425251fad89ef700ba71f24011fed15976"} Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.015355 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.034696 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9stlh" podStartSLOduration=34.234136359 podStartE2EDuration="39.034666969s" podCreationTimestamp="2026-02-19 08:15:44 +0000 UTC" firstStartedPulling="2026-02-19 08:16:17.337081045 +0000 UTC m=+934.994199993" lastFinishedPulling="2026-02-19 08:16:22.137611655 +0000 UTC m=+939.794730603" observedRunningTime="2026-02-19 08:16:23.03016868 +0000 UTC m=+940.687287618" watchObservedRunningTime="2026-02-19 08:16:23.034666969 +0000 UTC m=+940.691785927" Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.054996 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6m7ch" podStartSLOduration=35.193662659 podStartE2EDuration="39.054972534s" podCreationTimestamp="2026-02-19 08:15:44 +0000 UTC" firstStartedPulling="2026-02-19 08:16:17.408769444 +0000 UTC m=+935.065888392" lastFinishedPulling="2026-02-19 08:16:21.270079319 +0000 UTC m=+938.927198267" observedRunningTime="2026-02-19 08:16:23.052299584 +0000 UTC m=+940.709418532" watchObservedRunningTime="2026-02-19 08:16:23.054972534 +0000 UTC m=+940.712091482" Feb 19 08:16:23 crc kubenswrapper[5023]: I0219 08:16:23.084543 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" podStartSLOduration=39.491724812 podStartE2EDuration="45.084522943s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:16:16.543222762 +0000 UTC m=+934.200341710" lastFinishedPulling="2026-02-19 08:16:22.136020893 +0000 UTC m=+939.793139841" observedRunningTime="2026-02-19 08:16:23.077656742 +0000 UTC m=+940.734775710" watchObservedRunningTime="2026-02-19 08:16:23.084522943 
+0000 UTC m=+940.741641891" Feb 19 08:16:25 crc kubenswrapper[5023]: I0219 08:16:25.353768 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:16:25 crc kubenswrapper[5023]: I0219 08:16:25.353873 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:16:25 crc kubenswrapper[5023]: I0219 08:16:25.380071 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:16:25 crc kubenswrapper[5023]: I0219 08:16:25.380121 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:16:25 crc kubenswrapper[5023]: I0219 08:16:25.408827 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:16:25 crc kubenswrapper[5023]: I0219 08:16:25.432203 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" podStartSLOduration=41.869534824 podStartE2EDuration="47.432181981s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:16:16.574390483 +0000 UTC m=+934.231509431" lastFinishedPulling="2026-02-19 08:16:22.13703764 +0000 UTC m=+939.794156588" observedRunningTime="2026-02-19 08:16:23.106383379 +0000 UTC m=+940.763502347" watchObservedRunningTime="2026-02-19 08:16:25.432181981 +0000 UTC m=+943.089300929" Feb 19 08:16:25 crc kubenswrapper[5023]: I0219 08:16:25.447362 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:16:25 crc kubenswrapper[5023]: E0219 08:16:25.481249 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" podUID="e9e36838-6d27-4e7e-9619-e3cd7b304426" Feb 19 08:16:26 crc kubenswrapper[5023]: E0219 08:16:26.479590 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" podUID="6e8405b6-2fae-404e-87c3-635d94cc4376" Feb 19 08:16:27 crc kubenswrapper[5023]: E0219 08:16:27.481964 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.194:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" Feb 19 08:16:28 crc kubenswrapper[5023]: I0219 08:16:28.927043 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5xq6x" Feb 19 08:16:28 crc kubenswrapper[5023]: I0219 08:16:28.955874 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jvqln" Feb 19 08:16:28 crc kubenswrapper[5023]: I0219 08:16:28.972190 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ppgdp" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.009140 5023 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hsz4t" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.025271 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-s74tq" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.099199 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.100881 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-58ml6" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.228639 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9zksh" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.237513 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-m2bd5" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.332036 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9rxg5" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.340243 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lfj5q" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.367436 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wgs6h" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.429601 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-kjbpp" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.622926 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dfkgq" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.685932 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-jdlhp" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.760934 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9wcz4" Feb 19 08:16:29 crc kubenswrapper[5023]: I0219 08:16:29.856166 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-shhzj" Feb 19 08:16:30 crc kubenswrapper[5023]: I0219 08:16:30.083150 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-ks9rd" Feb 19 08:16:30 crc kubenswrapper[5023]: I0219 08:16:30.875251 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-txbbh" Feb 19 08:16:31 crc kubenswrapper[5023]: I0219 08:16:31.762590 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-c8dc87cd9-xrk5c" Feb 19 08:16:35 crc kubenswrapper[5023]: I0219 08:16:35.398881 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:16:35 crc kubenswrapper[5023]: I0219 08:16:35.434571 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:16:35 crc 
kubenswrapper[5023]: I0219 08:16:35.461813 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6m7ch"] Feb 19 08:16:35 crc kubenswrapper[5023]: I0219 08:16:35.514345 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9" Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.129179 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6m7ch" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="registry-server" containerID="cri-o://a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91" gracePeriod=2 Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.528049 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.648750 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-utilities\") pod \"c401de64-b8de-4d9d-b291-84a0806fe6bc\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.648852 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-catalog-content\") pod \"c401de64-b8de-4d9d-b291-84a0806fe6bc\" (UID: \"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.648874 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4k2p\" (UniqueName: \"kubernetes.io/projected/c401de64-b8de-4d9d-b291-84a0806fe6bc-kube-api-access-j4k2p\") pod \"c401de64-b8de-4d9d-b291-84a0806fe6bc\" (UID: 
\"c401de64-b8de-4d9d-b291-84a0806fe6bc\") " Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.649820 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-utilities" (OuterVolumeSpecName: "utilities") pod "c401de64-b8de-4d9d-b291-84a0806fe6bc" (UID: "c401de64-b8de-4d9d-b291-84a0806fe6bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.654650 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c401de64-b8de-4d9d-b291-84a0806fe6bc-kube-api-access-j4k2p" (OuterVolumeSpecName: "kube-api-access-j4k2p") pod "c401de64-b8de-4d9d-b291-84a0806fe6bc" (UID: "c401de64-b8de-4d9d-b291-84a0806fe6bc"). InnerVolumeSpecName "kube-api-access-j4k2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.709512 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c401de64-b8de-4d9d-b291-84a0806fe6bc" (UID: "c401de64-b8de-4d9d-b291-84a0806fe6bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.750109 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.750143 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c401de64-b8de-4d9d-b291-84a0806fe6bc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:36 crc kubenswrapper[5023]: I0219 08:16:36.750155 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4k2p\" (UniqueName: \"kubernetes.io/projected/c401de64-b8de-4d9d-b291-84a0806fe6bc-kube-api-access-j4k2p\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.047526 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9stlh"] Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.048182 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9stlh" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="registry-server" containerID="cri-o://3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae" gracePeriod=2 Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.138638 5023 generic.go:334] "Generic (PLEG): container finished" podID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerID="a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91" exitCode=0 Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.138684 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m7ch" event={"ID":"c401de64-b8de-4d9d-b291-84a0806fe6bc","Type":"ContainerDied","Data":"a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91"} Feb 19 
08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.138713 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m7ch" event={"ID":"c401de64-b8de-4d9d-b291-84a0806fe6bc","Type":"ContainerDied","Data":"4891a5d5f24de1fbb1a2884cd52b3440e8e8495c7f0b48725dd057c0107e5e21"} Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.138731 5023 scope.go:117] "RemoveContainer" containerID="a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.138740 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6m7ch" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.163971 5023 scope.go:117] "RemoveContainer" containerID="caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.177762 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6m7ch"] Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.182777 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6m7ch"] Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.199265 5023 scope.go:117] "RemoveContainer" containerID="d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.218822 5023 scope.go:117] "RemoveContainer" containerID="a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91" Feb 19 08:16:37 crc kubenswrapper[5023]: E0219 08:16:37.220345 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91\": container with ID starting with a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91 not found: ID does not exist" 
containerID="a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.220389 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91"} err="failed to get container status \"a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91\": rpc error: code = NotFound desc = could not find container \"a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91\": container with ID starting with a50106edbcb8d49a4d36decb4a22dd6686b88a6b03e8862f9e36339fcb7a4d91 not found: ID does not exist" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.220420 5023 scope.go:117] "RemoveContainer" containerID="caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14" Feb 19 08:16:37 crc kubenswrapper[5023]: E0219 08:16:37.220816 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14\": container with ID starting with caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14 not found: ID does not exist" containerID="caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.220926 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14"} err="failed to get container status \"caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14\": rpc error: code = NotFound desc = could not find container \"caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14\": container with ID starting with caa3002b945c61d3cb3cd53b6ce7afca0355bba0d8c08ef9aef1c238a5e26d14 not found: ID does not exist" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.220959 5023 scope.go:117] 
"RemoveContainer" containerID="d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7" Feb 19 08:16:37 crc kubenswrapper[5023]: E0219 08:16:37.221224 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7\": container with ID starting with d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7 not found: ID does not exist" containerID="d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.221242 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7"} err="failed to get container status \"d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7\": rpc error: code = NotFound desc = could not find container \"d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7\": container with ID starting with d1b8f1b1a971f25ff565f888f0cc27e0416967e089bd9bf81b6898250abdf6d7 not found: ID does not exist" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.430706 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.501095 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" path="/var/lib/kubelet/pods/c401de64-b8de-4d9d-b291-84a0806fe6bc/volumes" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.563285 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlh6b\" (UniqueName: \"kubernetes.io/projected/817b2931-4fce-4f48-b2f5-cf2daed7e421-kube-api-access-xlh6b\") pod \"817b2931-4fce-4f48-b2f5-cf2daed7e421\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.563392 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-utilities\") pod \"817b2931-4fce-4f48-b2f5-cf2daed7e421\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.563411 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-catalog-content\") pod \"817b2931-4fce-4f48-b2f5-cf2daed7e421\" (UID: \"817b2931-4fce-4f48-b2f5-cf2daed7e421\") " Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.564904 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-utilities" (OuterVolumeSpecName: "utilities") pod "817b2931-4fce-4f48-b2f5-cf2daed7e421" (UID: "817b2931-4fce-4f48-b2f5-cf2daed7e421"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.569191 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817b2931-4fce-4f48-b2f5-cf2daed7e421-kube-api-access-xlh6b" (OuterVolumeSpecName: "kube-api-access-xlh6b") pod "817b2931-4fce-4f48-b2f5-cf2daed7e421" (UID: "817b2931-4fce-4f48-b2f5-cf2daed7e421"). InnerVolumeSpecName "kube-api-access-xlh6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.611747 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "817b2931-4fce-4f48-b2f5-cf2daed7e421" (UID: "817b2931-4fce-4f48-b2f5-cf2daed7e421"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.666067 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlh6b\" (UniqueName: \"kubernetes.io/projected/817b2931-4fce-4f48-b2f5-cf2daed7e421-kube-api-access-xlh6b\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.666868 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:37 crc kubenswrapper[5023]: I0219 08:16:37.667541 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817b2931-4fce-4f48-b2f5-cf2daed7e421-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.157427 5023 generic.go:334] "Generic (PLEG): container finished" podID="817b2931-4fce-4f48-b2f5-cf2daed7e421" 
containerID="3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae" exitCode=0 Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.157502 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9stlh" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.157516 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9stlh" event={"ID":"817b2931-4fce-4f48-b2f5-cf2daed7e421","Type":"ContainerDied","Data":"3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae"} Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.157558 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9stlh" event={"ID":"817b2931-4fce-4f48-b2f5-cf2daed7e421","Type":"ContainerDied","Data":"70ec541358dca0b85ad0d28f0a4e25d745a0aaba6f740e8c14aa1c7b17641271"} Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.157578 5023 scope.go:117] "RemoveContainer" containerID="3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.198469 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9stlh"] Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.198529 5023 scope.go:117] "RemoveContainer" containerID="c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.206534 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9stlh"] Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.228732 5023 scope.go:117] "RemoveContainer" containerID="e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.262213 5023 scope.go:117] "RemoveContainer" containerID="3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae" Feb 19 
08:16:38 crc kubenswrapper[5023]: E0219 08:16:38.262779 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae\": container with ID starting with 3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae not found: ID does not exist" containerID="3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.262814 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae"} err="failed to get container status \"3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae\": rpc error: code = NotFound desc = could not find container \"3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae\": container with ID starting with 3f54e9506f38298433c5c5fc804ba058fbd7eccacc6fc4ceeda3af19d85b19ae not found: ID does not exist" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.262838 5023 scope.go:117] "RemoveContainer" containerID="c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf" Feb 19 08:16:38 crc kubenswrapper[5023]: E0219 08:16:38.263399 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf\": container with ID starting with c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf not found: ID does not exist" containerID="c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.263459 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf"} err="failed to get container status 
\"c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf\": rpc error: code = NotFound desc = could not find container \"c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf\": container with ID starting with c8acf49c3075d071e647d15aa007ccd16f61f20aabc2918c10f7e8d61e1fbfcf not found: ID does not exist" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.263497 5023 scope.go:117] "RemoveContainer" containerID="e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1" Feb 19 08:16:38 crc kubenswrapper[5023]: E0219 08:16:38.263864 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1\": container with ID starting with e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1 not found: ID does not exist" containerID="e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1" Feb 19 08:16:38 crc kubenswrapper[5023]: I0219 08:16:38.263914 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1"} err="failed to get container status \"e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1\": rpc error: code = NotFound desc = could not find container \"e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1\": container with ID starting with e2a3b88fc828d8e251f2c20445d6a629cab8a8007954396c0760ccb3947345e1 not found: ID does not exist" Feb 19 08:16:39 crc kubenswrapper[5023]: I0219 08:16:39.486525 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" path="/var/lib/kubelet/pods/817b2931-4fce-4f48-b2f5-cf2daed7e421/volumes" Feb 19 08:16:41 crc kubenswrapper[5023]: I0219 08:16:41.188798 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" event={"ID":"e9e36838-6d27-4e7e-9619-e3cd7b304426","Type":"ContainerStarted","Data":"9786b0da581972ecc2c59733d676a7f52387114a705d26fee606242b3df68044"} Feb 19 08:16:41 crc kubenswrapper[5023]: I0219 08:16:41.189672 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" Feb 19 08:16:41 crc kubenswrapper[5023]: I0219 08:16:41.219556 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" podStartSLOduration=3.196138147 podStartE2EDuration="1m3.219537073s" podCreationTimestamp="2026-02-19 08:15:38 +0000 UTC" firstStartedPulling="2026-02-19 08:15:40.950366839 +0000 UTC m=+898.607485787" lastFinishedPulling="2026-02-19 08:16:40.973765765 +0000 UTC m=+958.630884713" observedRunningTime="2026-02-19 08:16:41.215779864 +0000 UTC m=+958.872898812" watchObservedRunningTime="2026-02-19 08:16:41.219537073 +0000 UTC m=+958.876656021" Feb 19 08:16:42 crc kubenswrapper[5023]: I0219 08:16:42.196808 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" event={"ID":"3a0054e7-bed9-4f62-a6d9-c460a32deeef","Type":"ContainerStarted","Data":"e9b16f566d8203f13ab0a9c8c818975626ed051956f8fb0886df15fd9143d8bb"} Feb 19 08:16:42 crc kubenswrapper[5023]: I0219 08:16:42.198097 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" Feb 19 08:16:42 crc kubenswrapper[5023]: I0219 08:16:42.198698 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" event={"ID":"6e8405b6-2fae-404e-87c3-635d94cc4376","Type":"ContainerStarted","Data":"171d3a794e609e37d126c404097de23a2bebdf4421debe992e8c496fb82f49f9"} Feb 
19 08:16:42 crc kubenswrapper[5023]: I0219 08:16:42.218201 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" podStartSLOduration=2.7455087259999997 podStartE2EDuration="1m3.218182444s" podCreationTimestamp="2026-02-19 08:15:39 +0000 UTC" firstStartedPulling="2026-02-19 08:15:41.068704908 +0000 UTC m=+898.725823856" lastFinishedPulling="2026-02-19 08:16:41.541378626 +0000 UTC m=+959.198497574" observedRunningTime="2026-02-19 08:16:42.214451316 +0000 UTC m=+959.871570264" watchObservedRunningTime="2026-02-19 08:16:42.218182444 +0000 UTC m=+959.875301392" Feb 19 08:16:42 crc kubenswrapper[5023]: I0219 08:16:42.234288 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nsz2f" podStartSLOduration=3.163690109 podStartE2EDuration="1m3.234266508s" podCreationTimestamp="2026-02-19 08:15:39 +0000 UTC" firstStartedPulling="2026-02-19 08:15:41.08202483 +0000 UTC m=+898.739143778" lastFinishedPulling="2026-02-19 08:16:41.152601229 +0000 UTC m=+958.809720177" observedRunningTime="2026-02-19 08:16:42.231641719 +0000 UTC m=+959.888760667" watchObservedRunningTime="2026-02-19 08:16:42.234266508 +0000 UTC m=+959.891385466" Feb 19 08:16:49 crc kubenswrapper[5023]: I0219 08:16:49.396750 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-zwc8v" Feb 19 08:16:49 crc kubenswrapper[5023]: I0219 08:16:49.874677 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.090045 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"] Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 
08:16:54.090518 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" containerName="manager" containerID="cri-o://e9b16f566d8203f13ab0a9c8c818975626ed051956f8fb0886df15fd9143d8bb" gracePeriod=10 Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.362704 5023 generic.go:334] "Generic (PLEG): container finished" podID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" containerID="e9b16f566d8203f13ab0a9c8c818975626ed051956f8fb0886df15fd9143d8bb" exitCode=0 Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.362745 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" event={"ID":"3a0054e7-bed9-4f62-a6d9-c460a32deeef","Type":"ContainerDied","Data":"e9b16f566d8203f13ab0a9c8c818975626ed051956f8fb0886df15fd9143d8bb"} Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.528005 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.636197 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6wh9\" (UniqueName: \"kubernetes.io/projected/3a0054e7-bed9-4f62-a6d9-c460a32deeef-kube-api-access-z6wh9\") pod \"3a0054e7-bed9-4f62-a6d9-c460a32deeef\" (UID: \"3a0054e7-bed9-4f62-a6d9-c460a32deeef\") " Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.641591 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0054e7-bed9-4f62-a6d9-c460a32deeef-kube-api-access-z6wh9" (OuterVolumeSpecName: "kube-api-access-z6wh9") pod "3a0054e7-bed9-4f62-a6d9-c460a32deeef" (UID: "3a0054e7-bed9-4f62-a6d9-c460a32deeef"). InnerVolumeSpecName "kube-api-access-z6wh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.737920 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6wh9\" (UniqueName: \"kubernetes.io/projected/3a0054e7-bed9-4f62-a6d9-c460a32deeef-kube-api-access-z6wh9\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806105 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv"] Feb 19 08:16:54 crc kubenswrapper[5023]: E0219 08:16:54.806451 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="registry-server" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806476 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="registry-server" Feb 19 08:16:54 crc kubenswrapper[5023]: E0219 08:16:54.806491 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="extract-utilities" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806499 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="extract-utilities" Feb 19 08:16:54 crc kubenswrapper[5023]: E0219 08:16:54.806524 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="extract-content" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806535 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="extract-content" Feb 19 08:16:54 crc kubenswrapper[5023]: E0219 08:16:54.806547 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="extract-content" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806555 5023 
state_mem.go:107] "Deleted CPUSet assignment" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="extract-content" Feb 19 08:16:54 crc kubenswrapper[5023]: E0219 08:16:54.806568 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="extract-utilities" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806575 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="extract-utilities" Feb 19 08:16:54 crc kubenswrapper[5023]: E0219 08:16:54.806583 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" containerName="manager" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806592 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" containerName="manager" Feb 19 08:16:54 crc kubenswrapper[5023]: E0219 08:16:54.806605 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="registry-server" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806612 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="registry-server" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806807 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="817b2931-4fce-4f48-b2f5-cf2daed7e421" containerName="registry-server" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806824 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c401de64-b8de-4d9d-b291-84a0806fe6bc" containerName="registry-server" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.806835 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" containerName="manager" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.807432 5023 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.814018 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv"] Feb 19 08:16:54 crc kubenswrapper[5023]: I0219 08:16:54.939697 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcx45\" (UniqueName: \"kubernetes.io/projected/48e128ff-38e3-4713-bc18-4925fbc2a388-kube-api-access-mcx45\") pod \"watcher-operator-controller-manager-5b6f75fc4-htdcv\" (UID: \"48e128ff-38e3-4713-bc18-4925fbc2a388\") " pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.041749 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcx45\" (UniqueName: \"kubernetes.io/projected/48e128ff-38e3-4713-bc18-4925fbc2a388-kube-api-access-mcx45\") pod \"watcher-operator-controller-manager-5b6f75fc4-htdcv\" (UID: \"48e128ff-38e3-4713-bc18-4925fbc2a388\") " pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.072513 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcx45\" (UniqueName: \"kubernetes.io/projected/48e128ff-38e3-4713-bc18-4925fbc2a388-kube-api-access-mcx45\") pod \"watcher-operator-controller-manager-5b6f75fc4-htdcv\" (UID: \"48e128ff-38e3-4713-bc18-4925fbc2a388\") " pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.127449 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.371977 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" event={"ID":"3a0054e7-bed9-4f62-a6d9-c460a32deeef","Type":"ContainerDied","Data":"74939aeb93a22c864088ff8b560baa999a4d3debb71eaf4bdddd295c55234a0a"} Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.372096 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht" Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.372275 5023 scope.go:117] "RemoveContainer" containerID="e9b16f566d8203f13ab0a9c8c818975626ed051956f8fb0886df15fd9143d8bb" Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.409193 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"] Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.414626 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-mhwht"] Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.485516 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a0054e7-bed9-4f62-a6d9-c460a32deeef" path="/var/lib/kubelet/pods/3a0054e7-bed9-4f62-a6d9-c460a32deeef/volumes" Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.560312 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv"] Feb 19 08:16:55 crc kubenswrapper[5023]: I0219 08:16:55.762656 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv"] Feb 19 08:16:56 crc kubenswrapper[5023]: I0219 08:16:56.380776 5023 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" event={"ID":"48e128ff-38e3-4713-bc18-4925fbc2a388","Type":"ContainerStarted","Data":"009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c"} Feb 19 08:16:56 crc kubenswrapper[5023]: I0219 08:16:56.382105 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:56 crc kubenswrapper[5023]: I0219 08:16:56.382190 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" event={"ID":"48e128ff-38e3-4713-bc18-4925fbc2a388","Type":"ContainerStarted","Data":"18257239535ce6065f711dda463e1c23417b3d2f451ddf0f0249becd29fc5d50"} Feb 19 08:16:56 crc kubenswrapper[5023]: I0219 08:16:56.403795 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" podStartSLOduration=2.403773501 podStartE2EDuration="2.403773501s" podCreationTimestamp="2026-02-19 08:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:16:56.397438973 +0000 UTC m=+974.054557921" watchObservedRunningTime="2026-02-19 08:16:56.403773501 +0000 UTC m=+974.060892439" Feb 19 08:16:56 crc kubenswrapper[5023]: I0219 08:16:56.805896 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r"] Feb 19 08:16:56 crc kubenswrapper[5023]: I0219 08:16:56.806875 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" podUID="050217a8-a68f-46d3-bad5-aab926acbb4a" containerName="operator" containerID="cri-o://3bfb08e07fb12b59401e60326f73f450324558b92c36187a92af5861612e46b4" gracePeriod=10 Feb 19 08:16:57 
crc kubenswrapper[5023]: I0219 08:16:57.389408 5023 generic.go:334] "Generic (PLEG): container finished" podID="050217a8-a68f-46d3-bad5-aab926acbb4a" containerID="3bfb08e07fb12b59401e60326f73f450324558b92c36187a92af5861612e46b4" exitCode=0 Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.389459 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" event={"ID":"050217a8-a68f-46d3-bad5-aab926acbb4a","Type":"ContainerDied","Data":"3bfb08e07fb12b59401e60326f73f450324558b92c36187a92af5861612e46b4"} Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.389762 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" event={"ID":"050217a8-a68f-46d3-bad5-aab926acbb4a","Type":"ContainerDied","Data":"2e1a90f012a134958748071a75e13ed3e4d98f26158c40cc8aba29c5b05626f8"} Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.389775 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e1a90f012a134958748071a75e13ed3e4d98f26158c40cc8aba29c5b05626f8" Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.389875 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" podUID="48e128ff-38e3-4713-bc18-4925fbc2a388" containerName="manager" containerID="cri-o://009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c" gracePeriod=10 Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.471774 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.585702 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftfnr\" (UniqueName: \"kubernetes.io/projected/050217a8-a68f-46d3-bad5-aab926acbb4a-kube-api-access-ftfnr\") pod \"050217a8-a68f-46d3-bad5-aab926acbb4a\" (UID: \"050217a8-a68f-46d3-bad5-aab926acbb4a\") " Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.602913 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050217a8-a68f-46d3-bad5-aab926acbb4a-kube-api-access-ftfnr" (OuterVolumeSpecName: "kube-api-access-ftfnr") pod "050217a8-a68f-46d3-bad5-aab926acbb4a" (UID: "050217a8-a68f-46d3-bad5-aab926acbb4a"). InnerVolumeSpecName "kube-api-access-ftfnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.689800 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftfnr\" (UniqueName: \"kubernetes.io/projected/050217a8-a68f-46d3-bad5-aab926acbb4a-kube-api-access-ftfnr\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.766921 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.892955 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcx45\" (UniqueName: \"kubernetes.io/projected/48e128ff-38e3-4713-bc18-4925fbc2a388-kube-api-access-mcx45\") pod \"48e128ff-38e3-4713-bc18-4925fbc2a388\" (UID: \"48e128ff-38e3-4713-bc18-4925fbc2a388\") " Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.895701 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e128ff-38e3-4713-bc18-4925fbc2a388-kube-api-access-mcx45" (OuterVolumeSpecName: "kube-api-access-mcx45") pod "48e128ff-38e3-4713-bc18-4925fbc2a388" (UID: "48e128ff-38e3-4713-bc18-4925fbc2a388"). InnerVolumeSpecName "kube-api-access-mcx45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:16:57 crc kubenswrapper[5023]: I0219 08:16:57.994129 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcx45\" (UniqueName: \"kubernetes.io/projected/48e128ff-38e3-4713-bc18-4925fbc2a388-kube-api-access-mcx45\") on node \"crc\" DevicePath \"\"" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.400405 5023 generic.go:334] "Generic (PLEG): container finished" podID="48e128ff-38e3-4713-bc18-4925fbc2a388" containerID="009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c" exitCode=0 Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.400485 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.400512 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" event={"ID":"48e128ff-38e3-4713-bc18-4925fbc2a388","Type":"ContainerDied","Data":"009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c"} Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.400556 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv" event={"ID":"48e128ff-38e3-4713-bc18-4925fbc2a388","Type":"ContainerDied","Data":"18257239535ce6065f711dda463e1c23417b3d2f451ddf0f0249becd29fc5d50"} Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.400575 5023 scope.go:117] "RemoveContainer" containerID="009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.400503 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.419177 5023 scope.go:117] "RemoveContainer" containerID="009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c" Feb 19 08:16:58 crc kubenswrapper[5023]: E0219 08:16:58.419606 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c\": container with ID starting with 009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c not found: ID does not exist" containerID="009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.419672 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c"} err="failed to get container status \"009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c\": rpc error: code = NotFound desc = could not find container \"009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c\": container with ID starting with 009f26b6c664387acd90c57760d29fe029651bf8938787a269b4112ed16cda7c not found: ID does not exist" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.436411 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r"] Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.449020 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-bbb967fcc-6924r"] Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.456101 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv"] Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.469834 5023 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b6f75fc4-htdcv"] Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.891405 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-xc78g"] Feb 19 08:16:58 crc kubenswrapper[5023]: E0219 08:16:58.891756 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050217a8-a68f-46d3-bad5-aab926acbb4a" containerName="operator" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.891769 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="050217a8-a68f-46d3-bad5-aab926acbb4a" containerName="operator" Feb 19 08:16:58 crc kubenswrapper[5023]: E0219 08:16:58.891799 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e128ff-38e3-4713-bc18-4925fbc2a388" containerName="manager" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.891805 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e128ff-38e3-4713-bc18-4925fbc2a388" containerName="manager" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.891937 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="050217a8-a68f-46d3-bad5-aab926acbb4a" containerName="operator" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.891949 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e128ff-38e3-4713-bc18-4925fbc2a388" containerName="manager" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.892492 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-xc78g" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.895046 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-vc58p" Feb 19 08:16:58 crc kubenswrapper[5023]: I0219 08:16:58.899108 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-xc78g"] Feb 19 08:16:59 crc kubenswrapper[5023]: I0219 08:16:59.007575 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7smj\" (UniqueName: \"kubernetes.io/projected/4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee-kube-api-access-v7smj\") pod \"watcher-operator-index-xc78g\" (UID: \"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee\") " pod="openstack-operators/watcher-operator-index-xc78g" Feb 19 08:16:59 crc kubenswrapper[5023]: I0219 08:16:59.109353 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7smj\" (UniqueName: \"kubernetes.io/projected/4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee-kube-api-access-v7smj\") pod \"watcher-operator-index-xc78g\" (UID: \"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee\") " pod="openstack-operators/watcher-operator-index-xc78g" Feb 19 08:16:59 crc kubenswrapper[5023]: I0219 08:16:59.135394 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7smj\" (UniqueName: \"kubernetes.io/projected/4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee-kube-api-access-v7smj\") pod \"watcher-operator-index-xc78g\" (UID: \"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee\") " pod="openstack-operators/watcher-operator-index-xc78g" Feb 19 08:16:59 crc kubenswrapper[5023]: I0219 08:16:59.208456 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-xc78g" Feb 19 08:16:59 crc kubenswrapper[5023]: I0219 08:16:59.489505 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050217a8-a68f-46d3-bad5-aab926acbb4a" path="/var/lib/kubelet/pods/050217a8-a68f-46d3-bad5-aab926acbb4a/volumes" Feb 19 08:16:59 crc kubenswrapper[5023]: I0219 08:16:59.490239 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e128ff-38e3-4713-bc18-4925fbc2a388" path="/var/lib/kubelet/pods/48e128ff-38e3-4713-bc18-4925fbc2a388/volumes" Feb 19 08:16:59 crc kubenswrapper[5023]: I0219 08:16:59.590005 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-xc78g"] Feb 19 08:16:59 crc kubenswrapper[5023]: W0219 08:16:59.610201 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c11f969_0e9f_4fc6_98f3_78caf1c7f4ee.slice/crio-0c7ed4e31b2949db6c71003689ba176a4fab988297edeacac600a6835706dcd4 WatchSource:0}: Error finding container 0c7ed4e31b2949db6c71003689ba176a4fab988297edeacac600a6835706dcd4: Status 404 returned error can't find the container with id 0c7ed4e31b2949db6c71003689ba176a4fab988297edeacac600a6835706dcd4 Feb 19 08:17:00 crc kubenswrapper[5023]: I0219 08:17:00.435548 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-xc78g" event={"ID":"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee","Type":"ContainerStarted","Data":"6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015"} Feb 19 08:17:00 crc kubenswrapper[5023]: I0219 08:17:00.435833 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-xc78g" event={"ID":"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee","Type":"ContainerStarted","Data":"0c7ed4e31b2949db6c71003689ba176a4fab988297edeacac600a6835706dcd4"} Feb 19 08:17:00 crc kubenswrapper[5023]: I0219 
08:17:00.458417 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-xc78g" podStartSLOduration=2.237766753 podStartE2EDuration="2.458392793s" podCreationTimestamp="2026-02-19 08:16:58 +0000 UTC" firstStartedPulling="2026-02-19 08:16:59.612400404 +0000 UTC m=+977.269519352" lastFinishedPulling="2026-02-19 08:16:59.833026454 +0000 UTC m=+977.490145392" observedRunningTime="2026-02-19 08:17:00.450794402 +0000 UTC m=+978.107913350" watchObservedRunningTime="2026-02-19 08:17:00.458392793 +0000 UTC m=+978.115511741" Feb 19 08:17:02 crc kubenswrapper[5023]: I0219 08:17:02.481573 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-xc78g"] Feb 19 08:17:02 crc kubenswrapper[5023]: I0219 08:17:02.483076 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-xc78g" podUID="4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee" containerName="registry-server" containerID="cri-o://6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015" gracePeriod=2 Feb 19 08:17:02 crc kubenswrapper[5023]: I0219 08:17:02.960574 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-xc78g" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.062689 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7smj\" (UniqueName: \"kubernetes.io/projected/4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee-kube-api-access-v7smj\") pod \"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee\" (UID: \"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee\") " Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.070738 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee-kube-api-access-v7smj" (OuterVolumeSpecName: "kube-api-access-v7smj") pod "4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee" (UID: "4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee"). InnerVolumeSpecName "kube-api-access-v7smj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.097386 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-n6z9c"] Feb 19 08:17:03 crc kubenswrapper[5023]: E0219 08:17:03.097941 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee" containerName="registry-server" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.097971 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee" containerName="registry-server" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.098233 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee" containerName="registry-server" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.099000 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.103053 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-n6z9c"] Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.165040 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/47450b8f-2238-4432-9048-92cd1bb2a290-kube-api-access-r6rht\") pod \"watcher-operator-index-n6z9c\" (UID: \"47450b8f-2238-4432-9048-92cd1bb2a290\") " pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.165240 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7smj\" (UniqueName: \"kubernetes.io/projected/4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee-kube-api-access-v7smj\") on node \"crc\" DevicePath \"\"" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.267027 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/47450b8f-2238-4432-9048-92cd1bb2a290-kube-api-access-r6rht\") pod \"watcher-operator-index-n6z9c\" (UID: \"47450b8f-2238-4432-9048-92cd1bb2a290\") " pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.294444 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/47450b8f-2238-4432-9048-92cd1bb2a290-kube-api-access-r6rht\") pod \"watcher-operator-index-n6z9c\" (UID: \"47450b8f-2238-4432-9048-92cd1bb2a290\") " pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.420832 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.462437 5023 generic.go:334] "Generic (PLEG): container finished" podID="4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee" containerID="6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015" exitCode=0 Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.462671 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-xc78g" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.462733 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-xc78g" event={"ID":"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee","Type":"ContainerDied","Data":"6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015"} Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.462829 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-xc78g" event={"ID":"4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee","Type":"ContainerDied","Data":"0c7ed4e31b2949db6c71003689ba176a4fab988297edeacac600a6835706dcd4"} Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.462868 5023 scope.go:117] "RemoveContainer" containerID="6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.492916 5023 scope.go:117] "RemoveContainer" containerID="6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015" Feb 19 08:17:03 crc kubenswrapper[5023]: E0219 08:17:03.494037 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015\": container with ID starting with 6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015 not found: ID does not exist" 
containerID="6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.494135 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015"} err="failed to get container status \"6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015\": rpc error: code = NotFound desc = could not find container \"6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015\": container with ID starting with 6ad43eba0e0112a1e459d5a597a90134cc415cef3c3663bae21020eb6ea1b015 not found: ID does not exist" Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.503889 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-xc78g"] Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.510803 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-xc78g"] Feb 19 08:17:03 crc kubenswrapper[5023]: I0219 08:17:03.875855 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-n6z9c"] Feb 19 08:17:04 crc kubenswrapper[5023]: I0219 08:17:04.471596 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-n6z9c" event={"ID":"47450b8f-2238-4432-9048-92cd1bb2a290","Type":"ContainerStarted","Data":"5a72a69228d8e49a29d512e8dfc3fd8b861d478a98b976d7eb56477a19e9140f"} Feb 19 08:17:04 crc kubenswrapper[5023]: I0219 08:17:04.471868 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-n6z9c" event={"ID":"47450b8f-2238-4432-9048-92cd1bb2a290","Type":"ContainerStarted","Data":"599363f37d29acda6fc5ff1ff4466964a8fad801e0e49e5000566a9262b9f483"} Feb 19 08:17:04 crc kubenswrapper[5023]: I0219 08:17:04.496640 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/watcher-operator-index-n6z9c" podStartSLOduration=1.381548631 podStartE2EDuration="1.496599891s" podCreationTimestamp="2026-02-19 08:17:03 +0000 UTC" firstStartedPulling="2026-02-19 08:17:03.887668496 +0000 UTC m=+981.544787444" lastFinishedPulling="2026-02-19 08:17:04.002719756 +0000 UTC m=+981.659838704" observedRunningTime="2026-02-19 08:17:04.489271577 +0000 UTC m=+982.146390525" watchObservedRunningTime="2026-02-19 08:17:04.496599891 +0000 UTC m=+982.153718839" Feb 19 08:17:05 crc kubenswrapper[5023]: I0219 08:17:05.484990 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee" path="/var/lib/kubelet/pods/4c11f969-0e9f-4fc6-98f3-78caf1c7f4ee/volumes" Feb 19 08:17:11 crc kubenswrapper[5023]: I0219 08:17:11.870373 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:17:11 crc kubenswrapper[5023]: I0219 08:17:11.870989 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:17:13 crc kubenswrapper[5023]: I0219 08:17:13.421347 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:13 crc kubenswrapper[5023]: I0219 08:17:13.421415 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:13 crc kubenswrapper[5023]: I0219 08:17:13.450063 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:13 crc kubenswrapper[5023]: I0219 08:17:13.555967 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-n6z9c" Feb 19 08:17:15 crc kubenswrapper[5023]: I0219 08:17:15.932408 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv"] Feb 19 08:17:15 crc kubenswrapper[5023]: I0219 08:17:15.934797 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:15 crc kubenswrapper[5023]: I0219 08:17:15.938310 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wlrcz" Feb 19 08:17:15 crc kubenswrapper[5023]: I0219 08:17:15.962307 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv"] Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.061799 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-util\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.061875 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfr8m\" (UniqueName: \"kubernetes.io/projected/5d838e58-d185-465a-8999-7e2c9c572719-kube-api-access-wfr8m\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " 
pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.062180 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-bundle\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.163327 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-util\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.163403 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfr8m\" (UniqueName: \"kubernetes.io/projected/5d838e58-d185-465a-8999-7e2c9c572719-kube-api-access-wfr8m\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.163509 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-bundle\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 
08:17:16.163919 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-util\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.164023 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-bundle\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.196842 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfr8m\" (UniqueName: \"kubernetes.io/projected/5d838e58-d185-465a-8999-7e2c9c572719-kube-api-access-wfr8m\") pod \"6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.266911 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:16 crc kubenswrapper[5023]: I0219 08:17:16.697516 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv"] Feb 19 08:17:16 crc kubenswrapper[5023]: W0219 08:17:16.707608 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d838e58_d185_465a_8999_7e2c9c572719.slice/crio-a1154ad340b1b7e6588764673a27ad9d6628321151c079879040a0d1952ba571 WatchSource:0}: Error finding container a1154ad340b1b7e6588764673a27ad9d6628321151c079879040a0d1952ba571: Status 404 returned error can't find the container with id a1154ad340b1b7e6588764673a27ad9d6628321151c079879040a0d1952ba571 Feb 19 08:17:17 crc kubenswrapper[5023]: I0219 08:17:17.561451 5023 generic.go:334] "Generic (PLEG): container finished" podID="5d838e58-d185-465a-8999-7e2c9c572719" containerID="5d0b41e4b146adf081f8f4db00ef87e10f095618ad9904a5f70336afe2262fec" exitCode=0 Feb 19 08:17:17 crc kubenswrapper[5023]: I0219 08:17:17.561497 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" event={"ID":"5d838e58-d185-465a-8999-7e2c9c572719","Type":"ContainerDied","Data":"5d0b41e4b146adf081f8f4db00ef87e10f095618ad9904a5f70336afe2262fec"} Feb 19 08:17:17 crc kubenswrapper[5023]: I0219 08:17:17.561726 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" event={"ID":"5d838e58-d185-465a-8999-7e2c9c572719","Type":"ContainerStarted","Data":"a1154ad340b1b7e6588764673a27ad9d6628321151c079879040a0d1952ba571"} Feb 19 08:17:18 crc kubenswrapper[5023]: I0219 08:17:18.571506 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="5d838e58-d185-465a-8999-7e2c9c572719" containerID="11ad4d9911d8c108829b90cb2796b7b25c49d12173bbe79c37c8dcd42cb4ae27" exitCode=0 Feb 19 08:17:18 crc kubenswrapper[5023]: I0219 08:17:18.571550 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" event={"ID":"5d838e58-d185-465a-8999-7e2c9c572719","Type":"ContainerDied","Data":"11ad4d9911d8c108829b90cb2796b7b25c49d12173bbe79c37c8dcd42cb4ae27"} Feb 19 08:17:19 crc kubenswrapper[5023]: I0219 08:17:19.580851 5023 generic.go:334] "Generic (PLEG): container finished" podID="5d838e58-d185-465a-8999-7e2c9c572719" containerID="b461a5e52dda31ad4d60d755c153efecc1cf2d2d2f59871c14eb32522a4f6e5a" exitCode=0 Feb 19 08:17:19 crc kubenswrapper[5023]: I0219 08:17:19.580960 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" event={"ID":"5d838e58-d185-465a-8999-7e2c9c572719","Type":"ContainerDied","Data":"b461a5e52dda31ad4d60d755c153efecc1cf2d2d2f59871c14eb32522a4f6e5a"} Feb 19 08:17:20 crc kubenswrapper[5023]: I0219 08:17:20.875376 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:20 crc kubenswrapper[5023]: I0219 08:17:20.932492 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-bundle\") pod \"5d838e58-d185-465a-8999-7e2c9c572719\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " Feb 19 08:17:20 crc kubenswrapper[5023]: I0219 08:17:20.932635 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfr8m\" (UniqueName: \"kubernetes.io/projected/5d838e58-d185-465a-8999-7e2c9c572719-kube-api-access-wfr8m\") pod \"5d838e58-d185-465a-8999-7e2c9c572719\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " Feb 19 08:17:20 crc kubenswrapper[5023]: I0219 08:17:20.932732 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-util\") pod \"5d838e58-d185-465a-8999-7e2c9c572719\" (UID: \"5d838e58-d185-465a-8999-7e2c9c572719\") " Feb 19 08:17:20 crc kubenswrapper[5023]: I0219 08:17:20.934175 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-bundle" (OuterVolumeSpecName: "bundle") pod "5d838e58-d185-465a-8999-7e2c9c572719" (UID: "5d838e58-d185-465a-8999-7e2c9c572719"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:17:20 crc kubenswrapper[5023]: I0219 08:17:20.939988 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d838e58-d185-465a-8999-7e2c9c572719-kube-api-access-wfr8m" (OuterVolumeSpecName: "kube-api-access-wfr8m") pod "5d838e58-d185-465a-8999-7e2c9c572719" (UID: "5d838e58-d185-465a-8999-7e2c9c572719"). InnerVolumeSpecName "kube-api-access-wfr8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:17:20 crc kubenswrapper[5023]: I0219 08:17:20.956881 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-util" (OuterVolumeSpecName: "util") pod "5d838e58-d185-465a-8999-7e2c9c572719" (UID: "5d838e58-d185-465a-8999-7e2c9c572719"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:17:21 crc kubenswrapper[5023]: I0219 08:17:21.034023 5023 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-util\") on node \"crc\" DevicePath \"\"" Feb 19 08:17:21 crc kubenswrapper[5023]: I0219 08:17:21.034055 5023 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d838e58-d185-465a-8999-7e2c9c572719-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:17:21 crc kubenswrapper[5023]: I0219 08:17:21.034065 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfr8m\" (UniqueName: \"kubernetes.io/projected/5d838e58-d185-465a-8999-7e2c9c572719-kube-api-access-wfr8m\") on node \"crc\" DevicePath \"\"" Feb 19 08:17:21 crc kubenswrapper[5023]: I0219 08:17:21.603570 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" event={"ID":"5d838e58-d185-465a-8999-7e2c9c572719","Type":"ContainerDied","Data":"a1154ad340b1b7e6588764673a27ad9d6628321151c079879040a0d1952ba571"} Feb 19 08:17:21 crc kubenswrapper[5023]: I0219 08:17:21.603655 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1154ad340b1b7e6588764673a27ad9d6628321151c079879040a0d1952ba571" Feb 19 08:17:21 crc kubenswrapper[5023]: I0219 08:17:21.603754 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.313814 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x"] Feb 19 08:17:24 crc kubenswrapper[5023]: E0219 08:17:24.314481 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d838e58-d185-465a-8999-7e2c9c572719" containerName="extract" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.314498 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d838e58-d185-465a-8999-7e2c9c572719" containerName="extract" Feb 19 08:17:24 crc kubenswrapper[5023]: E0219 08:17:24.314517 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d838e58-d185-465a-8999-7e2c9c572719" containerName="pull" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.314525 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d838e58-d185-465a-8999-7e2c9c572719" containerName="pull" Feb 19 08:17:24 crc kubenswrapper[5023]: E0219 08:17:24.314541 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d838e58-d185-465a-8999-7e2c9c572719" containerName="util" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.314550 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d838e58-d185-465a-8999-7e2c9c572719" containerName="util" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.314793 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d838e58-d185-465a-8999-7e2c9c572719" containerName="extract" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.315420 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.318471 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.321217 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-m9xjx" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.327501 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x"] Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.380574 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlcfr\" (UniqueName: \"kubernetes.io/projected/457d8e5a-68d2-4807-ada4-a63013df8594-kube-api-access-jlcfr\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.380698 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-webhook-cert\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.380727 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-apiservice-cert\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: 
\"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.482310 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlcfr\" (UniqueName: \"kubernetes.io/projected/457d8e5a-68d2-4807-ada4-a63013df8594-kube-api-access-jlcfr\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.482400 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-webhook-cert\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.482431 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-apiservice-cert\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.488133 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-apiservice-cert\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.496754 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-webhook-cert\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.500507 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlcfr\" (UniqueName: \"kubernetes.io/projected/457d8e5a-68d2-4807-ada4-a63013df8594-kube-api-access-jlcfr\") pod \"watcher-operator-controller-manager-6c7c8c5885-6pz4x\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:24 crc kubenswrapper[5023]: I0219 08:17:24.653657 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:25 crc kubenswrapper[5023]: I0219 08:17:25.086313 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x"] Feb 19 08:17:25 crc kubenswrapper[5023]: I0219 08:17:25.640206 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" event={"ID":"457d8e5a-68d2-4807-ada4-a63013df8594","Type":"ContainerStarted","Data":"eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42"} Feb 19 08:17:25 crc kubenswrapper[5023]: I0219 08:17:25.640560 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" event={"ID":"457d8e5a-68d2-4807-ada4-a63013df8594","Type":"ContainerStarted","Data":"b80785279c7e0b9ec42e4ab7479d43b9a40abc2ba7cf585cf8ec804a4e011f67"} Feb 19 08:17:25 crc kubenswrapper[5023]: I0219 08:17:25.640591 5023 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:25 crc kubenswrapper[5023]: I0219 08:17:25.660083 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" podStartSLOduration=1.660061649 podStartE2EDuration="1.660061649s" podCreationTimestamp="2026-02-19 08:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:17:25.656538905 +0000 UTC m=+1003.313657873" watchObservedRunningTime="2026-02-19 08:17:25.660061649 +0000 UTC m=+1003.317180597" Feb 19 08:17:34 crc kubenswrapper[5023]: I0219 08:17:34.659528 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.323652 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk"] Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.325149 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.390989 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk"] Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.469171 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbvwr\" (UniqueName: \"kubernetes.io/projected/f13b16cf-c804-4498-be33-744ccaa1c8eb-kube-api-access-lbvwr\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.469225 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f13b16cf-c804-4498-be33-744ccaa1c8eb-apiservice-cert\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.469247 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f13b16cf-c804-4498-be33-744ccaa1c8eb-webhook-cert\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.570934 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbvwr\" (UniqueName: \"kubernetes.io/projected/f13b16cf-c804-4498-be33-744ccaa1c8eb-kube-api-access-lbvwr\") pod 
\"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.570991 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f13b16cf-c804-4498-be33-744ccaa1c8eb-apiservice-cert\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.571012 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f13b16cf-c804-4498-be33-744ccaa1c8eb-webhook-cert\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.577465 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f13b16cf-c804-4498-be33-744ccaa1c8eb-webhook-cert\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.579273 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f13b16cf-c804-4498-be33-744ccaa1c8eb-apiservice-cert\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.594287 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvwr\" (UniqueName: \"kubernetes.io/projected/f13b16cf-c804-4498-be33-744ccaa1c8eb-kube-api-access-lbvwr\") pod \"watcher-operator-controller-manager-7cc98bc54-8h2jk\" (UID: \"f13b16cf-c804-4498-be33-744ccaa1c8eb\") " pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:36 crc kubenswrapper[5023]: I0219 08:17:36.659448 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:37 crc kubenswrapper[5023]: I0219 08:17:37.136873 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk"] Feb 19 08:17:37 crc kubenswrapper[5023]: I0219 08:17:37.727587 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" event={"ID":"f13b16cf-c804-4498-be33-744ccaa1c8eb","Type":"ContainerStarted","Data":"ce68280f53bfa695ae4845e9df3a40a198ba19fee7a3e44aed8a6d630407768e"} Feb 19 08:17:37 crc kubenswrapper[5023]: I0219 08:17:37.727994 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" event={"ID":"f13b16cf-c804-4498-be33-744ccaa1c8eb","Type":"ContainerStarted","Data":"46d253627f1eaa31f086c7b92d115089d5f10366ba0e198b9e9a957a00b891bf"} Feb 19 08:17:37 crc kubenswrapper[5023]: I0219 08:17:37.728047 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:37 crc kubenswrapper[5023]: I0219 08:17:37.748095 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" podStartSLOduration=1.748074795 podStartE2EDuration="1.748074795s" 
podCreationTimestamp="2026-02-19 08:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:17:37.744650084 +0000 UTC m=+1015.401769032" watchObservedRunningTime="2026-02-19 08:17:37.748074795 +0000 UTC m=+1015.405193763" Feb 19 08:17:41 crc kubenswrapper[5023]: I0219 08:17:41.870542 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:17:41 crc kubenswrapper[5023]: I0219 08:17:41.871232 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:17:46 crc kubenswrapper[5023]: I0219 08:17:46.665374 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cc98bc54-8h2jk" Feb 19 08:17:46 crc kubenswrapper[5023]: I0219 08:17:46.734794 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x"] Feb 19 08:17:46 crc kubenswrapper[5023]: I0219 08:17:46.734998 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" podUID="457d8e5a-68d2-4807-ada4-a63013df8594" containerName="manager" containerID="cri-o://eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42" gracePeriod=10 Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.180890 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.242369 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlcfr\" (UniqueName: \"kubernetes.io/projected/457d8e5a-68d2-4807-ada4-a63013df8594-kube-api-access-jlcfr\") pod \"457d8e5a-68d2-4807-ada4-a63013df8594\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.242488 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-webhook-cert\") pod \"457d8e5a-68d2-4807-ada4-a63013df8594\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.242575 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-apiservice-cert\") pod \"457d8e5a-68d2-4807-ada4-a63013df8594\" (UID: \"457d8e5a-68d2-4807-ada4-a63013df8594\") " Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.248608 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "457d8e5a-68d2-4807-ada4-a63013df8594" (UID: "457d8e5a-68d2-4807-ada4-a63013df8594"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.249279 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457d8e5a-68d2-4807-ada4-a63013df8594-kube-api-access-jlcfr" (OuterVolumeSpecName: "kube-api-access-jlcfr") pod "457d8e5a-68d2-4807-ada4-a63013df8594" (UID: "457d8e5a-68d2-4807-ada4-a63013df8594"). 
InnerVolumeSpecName "kube-api-access-jlcfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.250326 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "457d8e5a-68d2-4807-ada4-a63013df8594" (UID: "457d8e5a-68d2-4807-ada4-a63013df8594"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.344520 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlcfr\" (UniqueName: \"kubernetes.io/projected/457d8e5a-68d2-4807-ada4-a63013df8594-kube-api-access-jlcfr\") on node \"crc\" DevicePath \"\"" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.344587 5023 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.344601 5023 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/457d8e5a-68d2-4807-ada4-a63013df8594-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.805843 5023 generic.go:334] "Generic (PLEG): container finished" podID="457d8e5a-68d2-4807-ada4-a63013df8594" containerID="eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42" exitCode=0 Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.805891 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" event={"ID":"457d8e5a-68d2-4807-ada4-a63013df8594","Type":"ContainerDied","Data":"eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42"} Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 
08:17:47.805923 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" event={"ID":"457d8e5a-68d2-4807-ada4-a63013df8594","Type":"ContainerDied","Data":"b80785279c7e0b9ec42e4ab7479d43b9a40abc2ba7cf585cf8ec804a4e011f67"} Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.805943 5023 scope.go:117] "RemoveContainer" containerID="eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.806673 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.848809 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x"] Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.855185 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c7c8c5885-6pz4x"] Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.859810 5023 scope.go:117] "RemoveContainer" containerID="eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42" Feb 19 08:17:47 crc kubenswrapper[5023]: E0219 08:17:47.864306 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42\": container with ID starting with eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42 not found: ID does not exist" containerID="eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42" Feb 19 08:17:47 crc kubenswrapper[5023]: I0219 08:17:47.864362 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42"} err="failed to get 
container status \"eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42\": rpc error: code = NotFound desc = could not find container \"eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42\": container with ID starting with eaae0457c2ef31de863288f68bacacf2ff365a4b7ffee5d99265737e547acd42 not found: ID does not exist" Feb 19 08:17:49 crc kubenswrapper[5023]: I0219 08:17:49.487040 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457d8e5a-68d2-4807-ada4-a63013df8594" path="/var/lib/kubelet/pods/457d8e5a-68d2-4807-ada4-a63013df8594/volumes" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.832473 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Feb 19 08:17:58 crc kubenswrapper[5023]: E0219 08:17:58.833277 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457d8e5a-68d2-4807-ada4-a63013df8594" containerName="manager" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.833291 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="457d8e5a-68d2-4807-ada4-a63013df8594" containerName="manager" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.833450 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="457d8e5a-68d2-4807-ada4-a63013df8594" containerName="manager" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.834272 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.837189 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.837519 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.837528 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.837607 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.837643 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-b8n89" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.837894 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.838048 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.838264 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.839135 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.848204 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.964045 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-s86f8\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-kube-api-access-s86f8\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.964667 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cec7daa-e826-419c-9c77-cfcabc90b362-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.964723 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cec7daa-e826-419c-9c77-cfcabc90b362-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.964753 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.964959 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-config-data\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.965085 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.965287 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.965369 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.965522 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.965546 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:58 crc kubenswrapper[5023]: I0219 08:17:58.965569 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.066955 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s86f8\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-kube-api-access-s86f8\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067017 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cec7daa-e826-419c-9c77-cfcabc90b362-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067056 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cec7daa-e826-419c-9c77-cfcabc90b362-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067080 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067112 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-config-data\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067147 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067232 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067264 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067430 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067459 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.067506 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.071802 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.072119 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.072454 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-config-data\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.073124 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.073306 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cec7daa-e826-419c-9c77-cfcabc90b362-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.074578 5023 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.074642 5023 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea2661f2b255b67dea0cb0ca04b1159606f57d0e349f6ad4e8ce89df1a5a8df5/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.079934 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.080547 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cec7daa-e826-419c-9c77-cfcabc90b362-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc 
kubenswrapper[5023]: I0219 08:17:59.093326 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.095345 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cec7daa-e826-419c-9c77-cfcabc90b362-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.108943 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s86f8\" (UniqueName: \"kubernetes.io/projected/7cec7daa-e826-419c-9c77-cfcabc90b362-kube-api-access-s86f8\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.111135 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.117163 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.123087 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.123318 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.123425 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.123577 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.123795 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.123964 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.124079 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-l2ft2" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.126422 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.139267 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40f58677-24dc-40b0-bf5c-3359228bc38c\") pod \"rabbitmq-server-0\" (UID: \"7cec7daa-e826-419c-9c77-cfcabc90b362\") " 
pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.161444 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.270739 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcx48\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-kube-api-access-mcx48\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.270787 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.270877 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.270915 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 
08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.270949 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.270970 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.270992 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ecf2c85d-9255-40bd-ac78-4165403c1754-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.271064 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.271085 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-server-conf\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.271102 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.271124 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ecf2c85d-9255-40bd-ac78-4165403c1754-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377326 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377746 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ecf2c85d-9255-40bd-ac78-4165403c1754-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377799 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcx48\" 
(UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-kube-api-access-mcx48\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377838 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377902 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377929 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377964 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.377994 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.378016 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ecf2c85d-9255-40bd-ac78-4165403c1754-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.378137 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.378163 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.379537 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: 
I0219 08:17:59.381960 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.382254 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.383302 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.389838 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ecf2c85d-9255-40bd-ac78-4165403c1754-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.395885 5023 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.396011 5023 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1fac1d5e55db5394a0512afa96f16f96e236d6739d7b039138a4beec920f1d79/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.400699 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ecf2c85d-9255-40bd-ac78-4165403c1754-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.406247 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcx48\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-kube-api-access-mcx48\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.410405 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.417884 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ecf2c85d-9255-40bd-ac78-4165403c1754-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.442596 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ecf2c85d-9255-40bd-ac78-4165403c1754-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.466937 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4127ad3c-54a6-419d-aae9-7a97bfc6b1fd\") pod \"rabbitmq-notifications-server-0\" (UID: \"ecf2c85d-9255-40bd-ac78-4165403c1754\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.544061 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.718266 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Feb 19 08:17:59 crc kubenswrapper[5023]: I0219 08:17:59.896482 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7cec7daa-e826-419c-9c77-cfcabc90b362","Type":"ContainerStarted","Data":"e7b5f57cd317113a406a95e6df55c6e99e830dfd3205a024989c139551555633"} Feb 19 08:18:00 crc kubenswrapper[5023]: W0219 08:18:00.030878 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf2c85d_9255_40bd_ac78_4165403c1754.slice/crio-0bc44585a07009a5c663e24f369e064cdf2f9f01b53fd81b202685e59136cc25 WatchSource:0}: Error finding container 0bc44585a07009a5c663e24f369e064cdf2f9f01b53fd81b202685e59136cc25: Status 404 returned error can't find the container with id 0bc44585a07009a5c663e24f369e064cdf2f9f01b53fd81b202685e59136cc25 Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.041142 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.477909 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.481909 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.532518 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.532773 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.534961 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.535221 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-zr6zf" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.536684 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.544877 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.625823 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b7f388-e73a-4206-bc50-93365c2e8515-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.625912 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/36b7f388-e73a-4206-bc50-93365c2e8515-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 
08:18:00.625962 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/36b7f388-e73a-4206-bc50-93365c2e8515-config-data-generated\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.625998 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-operator-scripts\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.626034 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-kolla-config\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.626064 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78mx\" (UniqueName: \"kubernetes.io/projected/36b7f388-e73a-4206-bc50-93365c2e8515-kube-api-access-t78mx\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.626100 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " 
pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.626254 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-config-data-default\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.717528 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.719070 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.723976 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.724171 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.724356 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-xp5s8" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727379 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-config-data-default\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727482 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b7f388-e73a-4206-bc50-93365c2e8515-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727518 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/36b7f388-e73a-4206-bc50-93365c2e8515-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727543 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/36b7f388-e73a-4206-bc50-93365c2e8515-config-data-generated\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727571 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-operator-scripts\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727597 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-kolla-config\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727647 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78mx\" (UniqueName: \"kubernetes.io/projected/36b7f388-e73a-4206-bc50-93365c2e8515-kube-api-access-t78mx\") pod \"openstack-galera-0\" (UID: 
\"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.727677 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.728092 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/36b7f388-e73a-4206-bc50-93365c2e8515-config-data-generated\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.728944 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.728975 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-config-data-default\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.729959 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-kolla-config\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.730162 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/36b7f388-e73a-4206-bc50-93365c2e8515-operator-scripts\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.741937 5023 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.742336 5023 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/54b71e374af5bbd265c374d6a4106f50229ce6f83ceedaeef4e7249bb3c9bcfc/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.741964 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36b7f388-e73a-4206-bc50-93365c2e8515-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.742861 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/36b7f388-e73a-4206-bc50-93365c2e8515-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.749900 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78mx\" (UniqueName: 
\"kubernetes.io/projected/36b7f388-e73a-4206-bc50-93365c2e8515-kube-api-access-t78mx\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.826385 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a520a2a5-2854-4b4f-89a2-b96b8e966519\") pod \"openstack-galera-0\" (UID: \"36b7f388-e73a-4206-bc50-93365c2e8515\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.829979 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-kolla-config\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.830056 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.830080 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.830103 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4d9\" (UniqueName: 
\"kubernetes.io/projected/948974f6-c39b-4658-a16c-9d76e6517e3f-kube-api-access-7v4d9\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.830125 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-config-data\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.861357 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.907463 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"ecf2c85d-9255-40bd-ac78-4165403c1754","Type":"ContainerStarted","Data":"0bc44585a07009a5c663e24f369e064cdf2f9f01b53fd81b202685e59136cc25"} Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.932278 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-kolla-config\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.932374 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.932426 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.932442 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v4d9\" (UniqueName: \"kubernetes.io/projected/948974f6-c39b-4658-a16c-9d76e6517e3f-kube-api-access-7v4d9\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.932489 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-config-data\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.933574 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-kolla-config\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.933907 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-config-data\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.940136 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " 
pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.943205 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:00 crc kubenswrapper[5023]: I0219 08:18:00.955707 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v4d9\" (UniqueName: \"kubernetes.io/projected/948974f6-c39b-4658-a16c-9d76e6517e3f-kube-api-access-7v4d9\") pod \"memcached-0\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") " pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.094902 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.166000 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.167171 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.179536 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-w64wn" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.192215 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.344422 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvnv\" (UniqueName: \"kubernetes.io/projected/bf5cf887-738e-45c4-92c8-957b9b434877-kube-api-access-7cvnv\") pod \"kube-state-metrics-0\" (UID: \"bf5cf887-738e-45c4-92c8-957b9b434877\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.446748 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvnv\" (UniqueName: \"kubernetes.io/projected/bf5cf887-738e-45c4-92c8-957b9b434877-kube-api-access-7cvnv\") pod \"kube-state-metrics-0\" (UID: \"bf5cf887-738e-45c4-92c8-957b9b434877\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.472228 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvnv\" (UniqueName: \"kubernetes.io/projected/bf5cf887-738e-45c4-92c8-957b9b434877-kube-api-access-7cvnv\") pod \"kube-state-metrics-0\" (UID: \"bf5cf887-738e-45c4-92c8-957b9b434877\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.509766 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.515979 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:18:01 crc kubenswrapper[5023]: W0219 08:18:01.542944 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b7f388_e73a_4206_bc50_93365c2e8515.slice/crio-e6dcab0161cfbfd0e52fa1696370f688751ad45b79eb4aaf5b2cd8d0e5b17cab WatchSource:0}: Error finding container e6dcab0161cfbfd0e52fa1696370f688751ad45b79eb4aaf5b2cd8d0e5b17cab: Status 404 returned error can't find the container with id e6dcab0161cfbfd0e52fa1696370f688751ad45b79eb4aaf5b2cd8d0e5b17cab Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.883666 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.900519 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.902140 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.904853 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.905598 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.905744 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.905849 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.905873 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-tpj4t" Feb 19 08:18:01 crc kubenswrapper[5023]: W0219 08:18:01.926758 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948974f6_c39b_4658_a16c_9d76e6517e3f.slice/crio-e4f18f2e53eaf8602801a07059fe2b9d56aec2d80d80eb9408ba03f0d8b15605 WatchSource:0}: Error finding container e4f18f2e53eaf8602801a07059fe2b9d56aec2d80d80eb9408ba03f0d8b15605: Status 404 returned error can't find the container with id e4f18f2e53eaf8602801a07059fe2b9d56aec2d80d80eb9408ba03f0d8b15605 Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.953553 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Feb 19 08:18:01 crc kubenswrapper[5023]: I0219 08:18:01.953693 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" 
event={"ID":"36b7f388-e73a-4206-bc50-93365c2e8515","Type":"ContainerStarted","Data":"e6dcab0161cfbfd0e52fa1696370f688751ad45b79eb4aaf5b2cd8d0e5b17cab"} Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.071882 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.071969 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.072025 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/834506b4-7dc5-4648-8e9f-abdbc041753a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.072056 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/834506b4-7dc5-4648-8e9f-abdbc041753a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.072077 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.072093 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/834506b4-7dc5-4648-8e9f-abdbc041753a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.072137 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqps4\" (UniqueName: \"kubernetes.io/projected/834506b4-7dc5-4648-8e9f-abdbc041753a-kube-api-access-gqps4\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.173604 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqps4\" (UniqueName: \"kubernetes.io/projected/834506b4-7dc5-4648-8e9f-abdbc041753a-kube-api-access-gqps4\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.173725 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 
crc kubenswrapper[5023]: I0219 08:18:02.173789 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.174179 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/834506b4-7dc5-4648-8e9f-abdbc041753a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.174244 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/834506b4-7dc5-4648-8e9f-abdbc041753a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.174286 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.174308 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/834506b4-7dc5-4648-8e9f-abdbc041753a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " 
pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.175009 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/834506b4-7dc5-4648-8e9f-abdbc041753a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.181963 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/834506b4-7dc5-4648-8e9f-abdbc041753a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.183360 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/834506b4-7dc5-4648-8e9f-abdbc041753a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.183648 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.185110 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " 
pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.189992 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/834506b4-7dc5-4648-8e9f-abdbc041753a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.199970 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqps4\" (UniqueName: \"kubernetes.io/projected/834506b4-7dc5-4648-8e9f-abdbc041753a-kube-api-access-gqps4\") pod \"alertmanager-metric-storage-0\" (UID: \"834506b4-7dc5-4648-8e9f-abdbc041753a\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.252941 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.360118 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:18:02 crc kubenswrapper[5023]: W0219 08:18:02.379243 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf5cf887_738e_45c4_92c8_957b9b434877.slice/crio-e261f0506c24acb1c31d1894aac117c1b41d64e86230e2792356cb0afe7b3867 WatchSource:0}: Error finding container e261f0506c24acb1c31d1894aac117c1b41d64e86230e2792356cb0afe7b3867: Status 404 returned error can't find the container with id e261f0506c24acb1c31d1894aac117c1b41d64e86230e2792356cb0afe7b3867 Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.503670 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 
08:18:02.505399 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.506447 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc"] Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.507919 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.513145 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.513367 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.513470 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.513563 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.513741 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.513869 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.513976 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-vk89l" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.514075 5023 
reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.514176 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-9bhx5" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.514476 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.547546 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc"] Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.583868 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587289 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587354 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587405 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587448 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/817dfdb3-899e-49c9-9a8b-73f8c3e80c52-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ztvtc\" (UID: \"817dfdb3-899e-49c9-9a8b-73f8c3e80c52\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587512 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587545 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587596 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587641 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzqz2\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-kube-api-access-mzqz2\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587675 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587716 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsgdc\" (UniqueName: \"kubernetes.io/projected/817dfdb3-899e-49c9-9a8b-73f8c3e80c52-kube-api-access-vsgdc\") pod \"observability-ui-dashboards-66cbf594b5-ztvtc\" (UID: \"817dfdb3-899e-49c9-9a8b-73f8c3e80c52\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587735 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.587754 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.689651 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.689997 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/817dfdb3-899e-49c9-9a8b-73f8c3e80c52-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ztvtc\" (UID: \"817dfdb3-899e-49c9-9a8b-73f8c3e80c52\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690039 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690061 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" 
Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690088 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690103 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzqz2\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-kube-api-access-mzqz2\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690123 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690150 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsgdc\" (UniqueName: \"kubernetes.io/projected/817dfdb3-899e-49c9-9a8b-73f8c3e80c52-kube-api-access-vsgdc\") pod \"observability-ui-dashboards-66cbf594b5-ztvtc\" (UID: \"817dfdb3-899e-49c9-9a8b-73f8c3e80c52\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690166 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690184 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690215 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.690238 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.694599 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.695047 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.700481 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.708860 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.709006 5023 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.709047 5023 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/494f88fb5905b3e8764af00928d6d9f8500eaf956069ab2ba98bfdb911d2e8b7/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.719807 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.722718 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.726955 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.727694 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.733386 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzqz2\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-kube-api-access-mzqz2\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.733400 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/817dfdb3-899e-49c9-9a8b-73f8c3e80c52-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ztvtc\" (UID: \"817dfdb3-899e-49c9-9a8b-73f8c3e80c52\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.741414 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsgdc\" (UniqueName: \"kubernetes.io/projected/817dfdb3-899e-49c9-9a8b-73f8c3e80c52-kube-api-access-vsgdc\") pod \"observability-ui-dashboards-66cbf594b5-ztvtc\" (UID: \"817dfdb3-899e-49c9-9a8b-73f8c3e80c52\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.835423 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.849204 5023 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.887318 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.920631 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59f5b9cc9c-bk5jg"] Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.929180 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.955701 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59f5b9cc9c-bk5jg"] Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.991842 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"bf5cf887-738e-45c4-92c8-957b9b434877","Type":"ContainerStarted","Data":"e261f0506c24acb1c31d1894aac117c1b41d64e86230e2792356cb0afe7b3867"} Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.994226 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-config\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.994298 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-oauth-config\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " 
pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.994336 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-oauth-serving-cert\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.994376 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-service-ca\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.994555 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6lh7\" (UniqueName: \"kubernetes.io/projected/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-kube-api-access-m6lh7\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.994691 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-serving-cert\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.994906 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-trusted-ca-bundle\") pod 
\"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:02 crc kubenswrapper[5023]: I0219 08:18:02.996409 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"948974f6-c39b-4658-a16c-9d76e6517e3f","Type":"ContainerStarted","Data":"e4f18f2e53eaf8602801a07059fe2b9d56aec2d80d80eb9408ba03f0d8b15605"} Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.035136 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.097206 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-service-ca\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.097293 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6lh7\" (UniqueName: \"kubernetes.io/projected/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-kube-api-access-m6lh7\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.097340 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-serving-cert\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.097395 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-trusted-ca-bundle\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.097431 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-config\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.097463 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-oauth-config\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.097490 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-oauth-serving-cert\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.099467 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-oauth-serving-cert\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.100126 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-service-ca\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.100435 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-trusted-ca-bundle\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.100907 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-config\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.112769 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-serving-cert\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.129246 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6lh7\" (UniqueName: \"kubernetes.io/projected/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-kube-api-access-m6lh7\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.130547 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/0bd2a6bf-392a-4adf-804e-a6fe9bdeba71-console-oauth-config\") pod \"console-59f5b9cc9c-bk5jg\" (UID: \"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71\") " pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.275041 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.656916 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 19 08:18:03 crc kubenswrapper[5023]: W0219 08:18:03.711944 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9adb67ae_4bdf_4348_b37c_4fbf44d95acc.slice/crio-704cd49a6d013d08ba79819daf04c9dc57d2a4e4295e636c7be0c1fb9e7c36e4 WatchSource:0}: Error finding container 704cd49a6d013d08ba79819daf04c9dc57d2a4e4295e636c7be0c1fb9e7c36e4: Status 404 returned error can't find the container with id 704cd49a6d013d08ba79819daf04c9dc57d2a4e4295e636c7be0c1fb9e7c36e4 Feb 19 08:18:03 crc kubenswrapper[5023]: I0219 08:18:03.754525 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc"] Feb 19 08:18:04 crc kubenswrapper[5023]: I0219 08:18:04.005575 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"834506b4-7dc5-4648-8e9f-abdbc041753a","Type":"ContainerStarted","Data":"c1e1091eec70b118af79a34b1b8423e6f1dce591d8070c98f783c917e1e9a304"} Feb 19 08:18:04 crc kubenswrapper[5023]: W0219 08:18:04.006169 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod817dfdb3_899e_49c9_9a8b_73f8c3e80c52.slice/crio-742095f692d200b6a4dd2e8c0ebc9d9fde27c945e8eaebdb918fb715b9b0a754 WatchSource:0}: Error finding container 
742095f692d200b6a4dd2e8c0ebc9d9fde27c945e8eaebdb918fb715b9b0a754: Status 404 returned error can't find the container with id 742095f692d200b6a4dd2e8c0ebc9d9fde27c945e8eaebdb918fb715b9b0a754 Feb 19 08:18:04 crc kubenswrapper[5023]: I0219 08:18:04.007221 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerStarted","Data":"704cd49a6d013d08ba79819daf04c9dc57d2a4e4295e636c7be0c1fb9e7c36e4"} Feb 19 08:18:04 crc kubenswrapper[5023]: I0219 08:18:04.284580 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59f5b9cc9c-bk5jg"] Feb 19 08:18:04 crc kubenswrapper[5023]: W0219 08:18:04.735235 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bd2a6bf_392a_4adf_804e_a6fe9bdeba71.slice/crio-2ed08700ea9dff2674b0830ce1037326f3ff33ef49e4dbc5ea8d9742ec2e486e WatchSource:0}: Error finding container 2ed08700ea9dff2674b0830ce1037326f3ff33ef49e4dbc5ea8d9742ec2e486e: Status 404 returned error can't find the container with id 2ed08700ea9dff2674b0830ce1037326f3ff33ef49e4dbc5ea8d9742ec2e486e Feb 19 08:18:05 crc kubenswrapper[5023]: I0219 08:18:05.028520 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" event={"ID":"817dfdb3-899e-49c9-9a8b-73f8c3e80c52","Type":"ContainerStarted","Data":"742095f692d200b6a4dd2e8c0ebc9d9fde27c945e8eaebdb918fb715b9b0a754"} Feb 19 08:18:05 crc kubenswrapper[5023]: I0219 08:18:05.030640 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59f5b9cc9c-bk5jg" event={"ID":"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71","Type":"ContainerStarted","Data":"2ed08700ea9dff2674b0830ce1037326f3ff33ef49e4dbc5ea8d9742ec2e486e"} Feb 19 08:18:11 crc kubenswrapper[5023]: I0219 08:18:11.870845 5023 patch_prober.go:28] interesting 
pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:18:11 crc kubenswrapper[5023]: I0219 08:18:11.871357 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:18:11 crc kubenswrapper[5023]: I0219 08:18:11.871409 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:18:11 crc kubenswrapper[5023]: I0219 08:18:11.872070 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"650edbaf66bd4a3e9e9e9ff44722cf8acdf5b9eac44eb0f6a93249eddba0373f"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:18:11 crc kubenswrapper[5023]: I0219 08:18:11.872124 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://650edbaf66bd4a3e9e9e9ff44722cf8acdf5b9eac44eb0f6a93249eddba0373f" gracePeriod=600 Feb 19 08:18:12 crc kubenswrapper[5023]: I0219 08:18:12.121263 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="650edbaf66bd4a3e9e9e9ff44722cf8acdf5b9eac44eb0f6a93249eddba0373f" exitCode=0 Feb 19 08:18:12 crc kubenswrapper[5023]: I0219 08:18:12.121343 
5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"650edbaf66bd4a3e9e9e9ff44722cf8acdf5b9eac44eb0f6a93249eddba0373f"} Feb 19 08:18:12 crc kubenswrapper[5023]: I0219 08:18:12.121413 5023 scope.go:117] "RemoveContainer" containerID="c9107fa6c65c5bdaadd0e295cacd61be82459a4c5b244fe42220dcb2855d3001" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.412945 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.413673 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/
lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t78mx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_watcher-kuttl-default(36b7f388-e73a-4206-bc50-93365c2e8515): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.414821 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="36b7f388-e73a-4206-bc50-93365c2e8515" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.425350 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.425476 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp 
/tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mcx48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext
{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_watcher-kuttl-default(ecf2c85d-9255-40bd-ac78-4165403c1754): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.427291 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="ecf2c85d-9255-40bd-ac78-4165403c1754" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.959239 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.959760 5023 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.959897 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=watcher-kuttl-default],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7cvnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_watcher-kuttl-default(bf5cf887-738e-45c4-92c8-957b9b434877): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 08:18:17 crc kubenswrapper[5023]: E0219 08:18:17.961087 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="bf5cf887-738e-45c4-92c8-957b9b434877" Feb 19 08:18:18 crc kubenswrapper[5023]: I0219 08:18:18.204130 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59f5b9cc9c-bk5jg" event={"ID":"0bd2a6bf-392a-4adf-804e-a6fe9bdeba71","Type":"ContainerStarted","Data":"a50c419a81703de8c93e19c4879797cc5378b97f226b358241dd3d5b6dc1d368"} Feb 19 08:18:18 crc kubenswrapper[5023]: E0219 08:18:18.205533 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="36b7f388-e73a-4206-bc50-93365c2e8515" Feb 19 08:18:18 crc kubenswrapper[5023]: E0219 08:18:18.209809 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="bf5cf887-738e-45c4-92c8-957b9b434877" Feb 19 08:18:18 crc kubenswrapper[5023]: I0219 08:18:18.278181 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59f5b9cc9c-bk5jg" podStartSLOduration=16.278155582 podStartE2EDuration="16.278155582s" podCreationTimestamp="2026-02-19 08:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-19 08:18:18.271391673 +0000 UTC m=+1055.928510621" watchObservedRunningTime="2026-02-19 08:18:18.278155582 +0000 UTC m=+1055.935274530" Feb 19 08:18:19 crc kubenswrapper[5023]: I0219 08:18:19.228148 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" event={"ID":"817dfdb3-899e-49c9-9a8b-73f8c3e80c52","Type":"ContainerStarted","Data":"9ba90155033c560239f5368d7216eb4c284eedc4c321a3cb38fe0389cb0e6219"} Feb 19 08:18:19 crc kubenswrapper[5023]: I0219 08:18:19.231565 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"382a9da75f766d6a7fa79de0344e2f00ca61a6303d2cd1d90193c5d3204c10cf"} Feb 19 08:18:19 crc kubenswrapper[5023]: I0219 08:18:19.233441 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"948974f6-c39b-4658-a16c-9d76e6517e3f","Type":"ContainerStarted","Data":"8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8"} Feb 19 08:18:19 crc kubenswrapper[5023]: I0219 08:18:19.233472 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:19 crc kubenswrapper[5023]: I0219 08:18:19.246407 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ztvtc" podStartSLOduration=3.2953809720000002 podStartE2EDuration="17.246379312s" podCreationTimestamp="2026-02-19 08:18:02 +0000 UTC" firstStartedPulling="2026-02-19 08:18:04.016964357 +0000 UTC m=+1041.674083305" lastFinishedPulling="2026-02-19 08:18:17.967962697 +0000 UTC m=+1055.625081645" observedRunningTime="2026-02-19 08:18:19.242334655 +0000 UTC m=+1056.899453603" watchObservedRunningTime="2026-02-19 08:18:19.246379312 +0000 UTC m=+1056.903498350" Feb 19 
08:18:19 crc kubenswrapper[5023]: I0219 08:18:19.275601 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=3.25299736 podStartE2EDuration="19.275577276s" podCreationTimestamp="2026-02-19 08:18:00 +0000 UTC" firstStartedPulling="2026-02-19 08:18:01.953566948 +0000 UTC m=+1039.610685896" lastFinishedPulling="2026-02-19 08:18:17.976146824 +0000 UTC m=+1055.633265812" observedRunningTime="2026-02-19 08:18:19.265580791 +0000 UTC m=+1056.922699749" watchObservedRunningTime="2026-02-19 08:18:19.275577276 +0000 UTC m=+1056.932696224" Feb 19 08:18:20 crc kubenswrapper[5023]: I0219 08:18:20.241926 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7cec7daa-e826-419c-9c77-cfcabc90b362","Type":"ContainerStarted","Data":"44225bbde97b62b01ab4806cd1a7433777c675b0b355400e39966334814c2d15"} Feb 19 08:18:20 crc kubenswrapper[5023]: I0219 08:18:20.244424 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"ecf2c85d-9255-40bd-ac78-4165403c1754","Type":"ContainerStarted","Data":"94609aaa381ec253cd37498c06fd05c6225b662f0aa4b84ea92fae80eb692fd8"} Feb 19 08:18:21 crc kubenswrapper[5023]: I0219 08:18:21.252166 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"834506b4-7dc5-4648-8e9f-abdbc041753a","Type":"ContainerStarted","Data":"4af1f84e09d3da22b29a9216ca7cc39aaf9987c03d1f298a25b88cabf7c9b3e1"} Feb 19 08:18:21 crc kubenswrapper[5023]: I0219 08:18:21.253320 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerStarted","Data":"cddcf4d9ad13a894991269b28c5c3f8ef9f828b1a30756092a1db803d19246af"} Feb 19 08:18:23 crc kubenswrapper[5023]: I0219 08:18:23.276095 5023 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:23 crc kubenswrapper[5023]: I0219 08:18:23.276777 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:23 crc kubenswrapper[5023]: I0219 08:18:23.281758 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:24 crc kubenswrapper[5023]: I0219 08:18:24.275909 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-59f5b9cc9c-bk5jg" Feb 19 08:18:24 crc kubenswrapper[5023]: I0219 08:18:24.333091 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86c9d74687-pstmq"] Feb 19 08:18:26 crc kubenswrapper[5023]: I0219 08:18:26.096987 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Feb 19 08:18:27 crc kubenswrapper[5023]: I0219 08:18:27.297442 5023 generic.go:334] "Generic (PLEG): container finished" podID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerID="cddcf4d9ad13a894991269b28c5c3f8ef9f828b1a30756092a1db803d19246af" exitCode=0 Feb 19 08:18:27 crc kubenswrapper[5023]: I0219 08:18:27.297488 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerDied","Data":"cddcf4d9ad13a894991269b28c5c3f8ef9f828b1a30756092a1db803d19246af"} Feb 19 08:18:28 crc kubenswrapper[5023]: I0219 08:18:28.310451 5023 generic.go:334] "Generic (PLEG): container finished" podID="834506b4-7dc5-4648-8e9f-abdbc041753a" containerID="4af1f84e09d3da22b29a9216ca7cc39aaf9987c03d1f298a25b88cabf7c9b3e1" exitCode=0 Feb 19 08:18:28 crc kubenswrapper[5023]: I0219 08:18:28.310546 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"834506b4-7dc5-4648-8e9f-abdbc041753a","Type":"ContainerDied","Data":"4af1f84e09d3da22b29a9216ca7cc39aaf9987c03d1f298a25b88cabf7c9b3e1"} Feb 19 08:18:30 crc kubenswrapper[5023]: I0219 08:18:30.325358 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"bf5cf887-738e-45c4-92c8-957b9b434877","Type":"ContainerStarted","Data":"c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb"} Feb 19 08:18:30 crc kubenswrapper[5023]: I0219 08:18:30.326124 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:18:30 crc kubenswrapper[5023]: I0219 08:18:30.349846 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=1.8375719560000001 podStartE2EDuration="29.349814824s" podCreationTimestamp="2026-02-19 08:18:01 +0000 UTC" firstStartedPulling="2026-02-19 08:18:02.403862097 +0000 UTC m=+1040.060981045" lastFinishedPulling="2026-02-19 08:18:29.916104965 +0000 UTC m=+1067.573223913" observedRunningTime="2026-02-19 08:18:30.340195119 +0000 UTC m=+1067.997314067" watchObservedRunningTime="2026-02-19 08:18:30.349814824 +0000 UTC m=+1068.006933813" Feb 19 08:18:32 crc kubenswrapper[5023]: I0219 08:18:32.356997 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerStarted","Data":"cf084ecde820311e9a38fad4f87afaf7f54a15579d50ff6c8ea1af17512c5090"} Feb 19 08:18:32 crc kubenswrapper[5023]: I0219 08:18:32.372459 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"36b7f388-e73a-4206-bc50-93365c2e8515","Type":"ContainerStarted","Data":"2606d633c03343f93cc2d5edc7016c9273b13a756215f54896459a135c839d80"} Feb 19 
08:18:39 crc kubenswrapper[5023]: I0219 08:18:39.433602 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerStarted","Data":"aca6d9530789c4c44cfea8114e8692d6bd72a24b9142d4a26622b7c065e3121b"}
Feb 19 08:18:39 crc kubenswrapper[5023]: I0219 08:18:39.439023 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"834506b4-7dc5-4648-8e9f-abdbc041753a","Type":"ContainerStarted","Data":"dc58a3a72141b3ad34d7e43ae9a61851c3739148e46f2639d1638d62fabfca90"}
Feb 19 08:18:41 crc kubenswrapper[5023]: I0219 08:18:41.460075 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"834506b4-7dc5-4648-8e9f-abdbc041753a","Type":"ContainerStarted","Data":"3aad13d61a25e957d115788f868b65914ab3cb3af8b022747f41e8c598544c78"}
Feb 19 08:18:41 crc kubenswrapper[5023]: I0219 08:18:41.460579 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 19 08:18:41 crc kubenswrapper[5023]: I0219 08:18:41.463298 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 19 08:18:41 crc kubenswrapper[5023]: I0219 08:18:41.494302 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=5.093535514 podStartE2EDuration="40.494283174s" podCreationTimestamp="2026-02-19 08:18:01 +0000 UTC" firstStartedPulling="2026-02-19 08:18:03.169376614 +0000 UTC m=+1040.826495552" lastFinishedPulling="2026-02-19 08:18:38.570124264 +0000 UTC m=+1076.227243212" observedRunningTime="2026-02-19 08:18:41.490410401 +0000 UTC m=+1079.147529369" watchObservedRunningTime="2026-02-19 08:18:41.494283174 +0000 UTC m=+1079.151402122"
Feb 19 08:18:41 crc kubenswrapper[5023]: I0219 08:18:41.523210 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0"
Feb 19 08:18:42 crc kubenswrapper[5023]: I0219 08:18:42.471343 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerStarted","Data":"8f1eeaaa544cd97a09bdb05ba1b265b6b0c54e24dbaebf8508e5500e4026f35a"}
Feb 19 08:18:42 crc kubenswrapper[5023]: I0219 08:18:42.474034 5023 generic.go:334] "Generic (PLEG): container finished" podID="36b7f388-e73a-4206-bc50-93365c2e8515" containerID="2606d633c03343f93cc2d5edc7016c9273b13a756215f54896459a135c839d80" exitCode=0
Feb 19 08:18:42 crc kubenswrapper[5023]: I0219 08:18:42.474932 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"36b7f388-e73a-4206-bc50-93365c2e8515","Type":"ContainerDied","Data":"2606d633c03343f93cc2d5edc7016c9273b13a756215f54896459a135c839d80"}
Feb 19 08:18:42 crc kubenswrapper[5023]: I0219 08:18:42.508541 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=3.437474596 podStartE2EDuration="41.508521285s" podCreationTimestamp="2026-02-19 08:18:01 +0000 UTC" firstStartedPulling="2026-02-19 08:18:03.715053272 +0000 UTC m=+1041.372172220" lastFinishedPulling="2026-02-19 08:18:41.786099961 +0000 UTC m=+1079.443218909" observedRunningTime="2026-02-19 08:18:42.503012659 +0000 UTC m=+1080.160131607" watchObservedRunningTime="2026-02-19 08:18:42.508521285 +0000 UTC m=+1080.165640233"
Feb 19 08:18:42 crc kubenswrapper[5023]: I0219 08:18:42.849561 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 19 08:18:43 crc kubenswrapper[5023]: I0219 08:18:43.501390 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"36b7f388-e73a-4206-bc50-93365c2e8515","Type":"ContainerStarted","Data":"4254250cf3e5f92bc3d57b5b22413f8a579009da4743deab91f547be75e6b7c7"}
Feb 19 08:18:43 crc kubenswrapper[5023]: I0219 08:18:43.527833 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=14.07132338 podStartE2EDuration="44.527814681s" podCreationTimestamp="2026-02-19 08:17:59 +0000 UTC" firstStartedPulling="2026-02-19 08:18:01.60965112 +0000 UTC m=+1039.266770058" lastFinishedPulling="2026-02-19 08:18:32.066142411 +0000 UTC m=+1069.723261359" observedRunningTime="2026-02-19 08:18:43.523938218 +0000 UTC m=+1081.181057166" watchObservedRunningTime="2026-02-19 08:18:43.527814681 +0000 UTC m=+1081.184933629"
Feb 19 08:18:47 crc kubenswrapper[5023]: I0219 08:18:47.849490 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 19 08:18:47 crc kubenswrapper[5023]: I0219 08:18:47.852322 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 19 08:18:48 crc kubenswrapper[5023]: I0219 08:18:48.537950 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.383478 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-86c9d74687-pstmq" podUID="c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" containerName="console" containerID="cri-o://f9c5afa5644ea6716024114d2753b4be08c0c2e2874e3a8d7cc924d3d1dd316b" gracePeriod=15
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.545194 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86c9d74687-pstmq_c687f8ed-9bea-45d7-b892-cc20b0d8ca2e/console/0.log"
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.545474 5023 generic.go:334] "Generic (PLEG): container finished" podID="c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" containerID="f9c5afa5644ea6716024114d2753b4be08c0c2e2874e3a8d7cc924d3d1dd316b" exitCode=2
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.545576 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86c9d74687-pstmq" event={"ID":"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e","Type":"ContainerDied","Data":"f9c5afa5644ea6716024114d2753b4be08c0c2e2874e3a8d7cc924d3d1dd316b"}
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.847943 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86c9d74687-pstmq_c687f8ed-9bea-45d7-b892-cc20b0d8ca2e/console/0.log"
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.848013 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86c9d74687-pstmq"
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.882910 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-oauth-serving-cert\") pod \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") "
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.883000 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-trusted-ca-bundle\") pod \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") "
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.883049 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5trg\" (UniqueName: \"kubernetes.io/projected/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-kube-api-access-x5trg\") pod \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") "
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.883073 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-config\") pod \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") "
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.883095 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-service-ca\") pod \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") "
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.883190 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-serving-cert\") pod \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") "
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.883210 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-oauth-config\") pod \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\" (UID: \"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e\") "
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.932477 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" (UID: "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.932867 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" (UID: "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.936019 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-config" (OuterVolumeSpecName: "console-config") pod "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" (UID: "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.936354 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-service-ca" (OuterVolumeSpecName: "service-ca") pod "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" (UID: "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.939705 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" (UID: "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.940870 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" (UID: "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.941159 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-kube-api-access-x5trg" (OuterVolumeSpecName: "kube-api-access-x5trg") pod "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" (UID: "c687f8ed-9bea-45d7-b892-cc20b0d8ca2e"). InnerVolumeSpecName "kube-api-access-x5trg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.985096 5023 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.985128 5023 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.985138 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5trg\" (UniqueName: \"kubernetes.io/projected/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-kube-api-access-x5trg\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.985149 5023 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.985159 5023 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-service-ca\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.985168 5023 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:49 crc kubenswrapper[5023]: I0219 08:18:49.985177 5023 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.554911 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86c9d74687-pstmq_c687f8ed-9bea-45d7-b892-cc20b0d8ca2e/console/0.log"
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.554960 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86c9d74687-pstmq" event={"ID":"c687f8ed-9bea-45d7-b892-cc20b0d8ca2e","Type":"ContainerDied","Data":"48de7cd2e880cf4a008a95030124bae0e688abac351193b88ae317a7adf718e5"}
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.554995 5023 scope.go:117] "RemoveContainer" containerID="f9c5afa5644ea6716024114d2753b4be08c0c2e2874e3a8d7cc924d3d1dd316b"
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.555019 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86c9d74687-pstmq"
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.584612 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86c9d74687-pstmq"]
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.592288 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-86c9d74687-pstmq"]
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.673233 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.673473 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="prometheus" containerID="cri-o://cf084ecde820311e9a38fad4f87afaf7f54a15579d50ff6c8ea1af17512c5090" gracePeriod=600
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.673858 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="thanos-sidecar" containerID="cri-o://8f1eeaaa544cd97a09bdb05ba1b265b6b0c54e24dbaebf8508e5500e4026f35a" gracePeriod=600
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.673905 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="config-reloader" containerID="cri-o://aca6d9530789c4c44cfea8114e8692d6bd72a24b9142d4a26622b7c065e3121b" gracePeriod=600
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.861997 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0"
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.862150 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0"
Feb 19 08:18:50 crc kubenswrapper[5023]: I0219 08:18:50.962852 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0"
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.489528 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" path="/var/lib/kubelet/pods/c687f8ed-9bea-45d7-b892-cc20b0d8ca2e/volumes"
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.570514 5023 generic.go:334] "Generic (PLEG): container finished" podID="7cec7daa-e826-419c-9c77-cfcabc90b362" containerID="44225bbde97b62b01ab4806cd1a7433777c675b0b355400e39966334814c2d15" exitCode=0
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.570598 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7cec7daa-e826-419c-9c77-cfcabc90b362","Type":"ContainerDied","Data":"44225bbde97b62b01ab4806cd1a7433777c675b0b355400e39966334814c2d15"}
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.578910 5023 generic.go:334] "Generic (PLEG): container finished" podID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerID="8f1eeaaa544cd97a09bdb05ba1b265b6b0c54e24dbaebf8508e5500e4026f35a" exitCode=0
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.578960 5023 generic.go:334] "Generic (PLEG): container finished" podID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerID="aca6d9530789c4c44cfea8114e8692d6bd72a24b9142d4a26622b7c065e3121b" exitCode=0
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.578971 5023 generic.go:334] "Generic (PLEG): container finished" podID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerID="cf084ecde820311e9a38fad4f87afaf7f54a15579d50ff6c8ea1af17512c5090" exitCode=0
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.579034 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerDied","Data":"8f1eeaaa544cd97a09bdb05ba1b265b6b0c54e24dbaebf8508e5500e4026f35a"}
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.579073 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerDied","Data":"aca6d9530789c4c44cfea8114e8692d6bd72a24b9142d4a26622b7c065e3121b"}
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.579089 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerDied","Data":"cf084ecde820311e9a38fad4f87afaf7f54a15579d50ff6c8ea1af17512c5090"}
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.689891 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.694788 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0"
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.839632 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-2\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.839911 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzqz2\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-kube-api-access-mzqz2\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.839978 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-tls-assets\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.840019 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-1\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.840052 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config-out\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.840081 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-web-config\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.840115 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.840165 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-thanos-prometheus-http-client-file\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.840309 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.840347 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-0\") pod \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\" (UID: \"9adb67ae-4bdf-4348-b37c-4fbf44d95acc\") "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.841133 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.841340 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.842307 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.846429 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.848475 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config-out" (OuterVolumeSpecName: "config-out") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.854063 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config" (OuterVolumeSpecName: "config") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.855172 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-kube-api-access-mzqz2" (OuterVolumeSpecName: "kube-api-access-mzqz2") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "kube-api-access-mzqz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.858843 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.864072 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.892728 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-web-config" (OuterVolumeSpecName: "web-config") pod "9adb67ae-4bdf-4348-b37c-4fbf44d95acc" (UID: "9adb67ae-4bdf-4348-b37c-4fbf44d95acc"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941682 5023 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941730 5023 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config-out\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941745 5023 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-web-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941757 5023 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941769 5023 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941813 5023 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") on node \"crc\" "
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941826 5023 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941850 5023 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941862 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzqz2\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-kube-api-access-mzqz2\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.941873 5023 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adb67ae-4bdf-4348-b37c-4fbf44d95acc-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.958078 5023 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 19 08:18:51 crc kubenswrapper[5023]: I0219 08:18:51.958278 5023 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7") on node "crc"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.043803 5023 reconciler_common.go:293] "Volume detached for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") on node \"crc\" DevicePath \"\""
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.588829 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.588825 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"9adb67ae-4bdf-4348-b37c-4fbf44d95acc","Type":"ContainerDied","Data":"704cd49a6d013d08ba79819daf04c9dc57d2a4e4295e636c7be0c1fb9e7c36e4"}
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.588974 5023 scope.go:117] "RemoveContainer" containerID="8f1eeaaa544cd97a09bdb05ba1b265b6b0c54e24dbaebf8508e5500e4026f35a"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.590997 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"7cec7daa-e826-419c-9c77-cfcabc90b362","Type":"ContainerStarted","Data":"f65c89743809c8d7670b7123861a74a6c89a6b2d84ba0944ac7fa5cbd0155e81"}
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.591387 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.594088 5023 generic.go:334] "Generic (PLEG): container finished" podID="ecf2c85d-9255-40bd-ac78-4165403c1754" containerID="94609aaa381ec253cd37498c06fd05c6225b662f0aa4b84ea92fae80eb692fd8" exitCode=0
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.594127 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"ecf2c85d-9255-40bd-ac78-4165403c1754","Type":"ContainerDied","Data":"94609aaa381ec253cd37498c06fd05c6225b662f0aa4b84ea92fae80eb692fd8"}
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.611141 5023 scope.go:117] "RemoveContainer" containerID="aca6d9530789c4c44cfea8114e8692d6bd72a24b9142d4a26622b7c065e3121b"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.625347 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.368439242 podStartE2EDuration="55.625331328s" podCreationTimestamp="2026-02-19 08:17:57 +0000 UTC" firstStartedPulling="2026-02-19 08:17:59.72684761 +0000 UTC m=+1037.383966558" lastFinishedPulling="2026-02-19 08:18:17.983739696 +0000 UTC m=+1055.640858644" observedRunningTime="2026-02-19 08:18:52.621563798 +0000 UTC m=+1090.278682766" watchObservedRunningTime="2026-02-19 08:18:52.625331328 +0000 UTC m=+1090.282450276"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.640046 5023 scope.go:117] "RemoveContainer" containerID="cf084ecde820311e9a38fad4f87afaf7f54a15579d50ff6c8ea1af17512c5090"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.673782 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.674551 5023 scope.go:117] "RemoveContainer" containerID="cddcf4d9ad13a894991269b28c5c3f8ef9f828b1a30756092a1db803d19246af"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.682015 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.712592 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Feb 19 08:18:52 crc kubenswrapper[5023]: E0219 08:18:52.712917 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="config-reloader"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.712933 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="config-reloader"
Feb 19 08:18:52 crc kubenswrapper[5023]: E0219 08:18:52.712963 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" containerName="console"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.712972 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" containerName="console"
Feb 19 08:18:52 crc kubenswrapper[5023]: E0219 08:18:52.712990 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="thanos-sidecar"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.712996 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="thanos-sidecar"
Feb 19 08:18:52 crc kubenswrapper[5023]: E0219 08:18:52.713013 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="init-config-reloader"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.713019 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="init-config-reloader"
Feb 19 08:18:52 crc kubenswrapper[5023]: E0219 08:18:52.713040 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="prometheus"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.713047 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="prometheus"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.713197 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="prometheus"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.713210 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c687f8ed-9bea-45d7-b892-cc20b0d8ca2e" containerName="console"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.713218 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="thanos-sidecar"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.713231 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" containerName="config-reloader"
Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.714833 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.716955 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.717023 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-vk89l" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.717141 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.717948 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.718876 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.719129 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.719134 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.719516 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.732415 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.746151 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862384 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862435 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862496 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862515 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862540 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862567 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862584 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862609 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862637 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" 
(UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862671 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862697 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862720 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.862747 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pbz\" (UniqueName: \"kubernetes.io/projected/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-kube-api-access-p2pbz\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964769 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964813 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964841 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964865 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964891 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964918 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964933 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964963 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.964989 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.965009 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" 
(UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.965030 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2pbz\" (UniqueName: \"kubernetes.io/projected/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-kube-api-access-p2pbz\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.965059 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.965077 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.965762 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc 
kubenswrapper[5023]: I0219 08:18:52.966250 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.969055 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.970531 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.974931 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.976692 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.976978 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.977579 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.983331 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.985116 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:52 crc kubenswrapper[5023]: I0219 08:18:52.985844 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.009592 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2pbz\" (UniqueName: \"kubernetes.io/projected/7b0233d3-76a4-4e22-b584-b5ccdc1d82cc-kube-api-access-p2pbz\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.019158 5023 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.019202 5023 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/494f88fb5905b3e8764af00928d6d9f8500eaf956069ab2ba98bfdb911d2e8b7/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.142282 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-803d9fef-ecec-474e-9d8f-c024aa3b0ca7\") pod \"prometheus-metric-storage-0\" (UID: \"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.349708 5023 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.490965 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adb67ae-4bdf-4348-b37c-4fbf44d95acc" path="/var/lib/kubelet/pods/9adb67ae-4bdf-4348-b37c-4fbf44d95acc/volumes" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.602210 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"ecf2c85d-9255-40bd-ac78-4165403c1754","Type":"ContainerStarted","Data":"a1fb176f8112c4a8935c0d6f7beee3c2a6ff93e251cab1ac3add2a6c8e8571e1"} Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.603531 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.628189 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=-9223371981.226608 podStartE2EDuration="55.628168287s" podCreationTimestamp="2026-02-19 08:17:58 +0000 UTC" firstStartedPulling="2026-02-19 08:18:00.033812619 +0000 UTC m=+1037.690931587" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:18:53.624863869 +0000 UTC m=+1091.281982817" watchObservedRunningTime="2026-02-19 08:18:53.628168287 +0000 UTC m=+1091.285287235" Feb 19 08:18:53 crc kubenswrapper[5023]: I0219 08:18:53.823324 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 19 08:18:53 crc kubenswrapper[5023]: W0219 08:18:53.832714 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b0233d3_76a4_4e22_b584_b5ccdc1d82cc.slice/crio-14da8cf6763a4902447f2fe715288c39af0fb7bfbb1b7bb67051c0d86acbc321 
WatchSource:0}: Error finding container 14da8cf6763a4902447f2fe715288c39af0fb7bfbb1b7bb67051c0d86acbc321: Status 404 returned error can't find the container with id 14da8cf6763a4902447f2fe715288c39af0fb7bfbb1b7bb67051c0d86acbc321 Feb 19 08:18:54 crc kubenswrapper[5023]: I0219 08:18:54.615776 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc","Type":"ContainerStarted","Data":"14da8cf6763a4902447f2fe715288c39af0fb7bfbb1b7bb67051c0d86acbc321"} Feb 19 08:18:56 crc kubenswrapper[5023]: I0219 08:18:56.631475 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc","Type":"ContainerStarted","Data":"b65cba75887ce3d7d310cc8f7e8faa039d4c2c30227a2bb6bd99126469571f60"} Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.500940 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-mrpbv"] Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.502457 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.504729 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.513675 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-mrpbv"] Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.666916 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-operator-scripts\") pod \"root-account-create-update-mrpbv\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.667339 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksgwh\" (UniqueName: \"kubernetes.io/projected/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-kube-api-access-ksgwh\") pod \"root-account-create-update-mrpbv\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.768705 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksgwh\" (UniqueName: \"kubernetes.io/projected/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-kube-api-access-ksgwh\") pod \"root-account-create-update-mrpbv\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.768862 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-operator-scripts\") pod \"root-account-create-update-mrpbv\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.769742 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-operator-scripts\") pod \"root-account-create-update-mrpbv\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.798783 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksgwh\" (UniqueName: \"kubernetes.io/projected/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-kube-api-access-ksgwh\") pod \"root-account-create-update-mrpbv\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:18:59 crc kubenswrapper[5023]: I0219 08:18:59.819576 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.281938 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-mrpbv"] Feb 19 08:19:00 crc kubenswrapper[5023]: W0219 08:19:00.287721 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58ffbb79_fdf2_40d0_9f7b_09b5d8441476.slice/crio-5996751ff939fb878fb5709634098c0ccfc389fe7e6302e6dab9df0847aea087 WatchSource:0}: Error finding container 5996751ff939fb878fb5709634098c0ccfc389fe7e6302e6dab9df0847aea087: Status 404 returned error can't find the container with id 5996751ff939fb878fb5709634098c0ccfc389fe7e6302e6dab9df0847aea087 Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.655694 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-92rvd"] Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.657550 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.662788 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-mrpbv" event={"ID":"58ffbb79-fdf2-40d0-9f7b-09b5d8441476","Type":"ContainerStarted","Data":"751621a94a5f21e1ed5844eb84af541e4024be153dbed9d516e75c068c368299"} Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.662851 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-mrpbv" event={"ID":"58ffbb79-fdf2-40d0-9f7b-09b5d8441476","Type":"ContainerStarted","Data":"5996751ff939fb878fb5709634098c0ccfc389fe7e6302e6dab9df0847aea087"} Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.664144 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-92rvd"] Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.686270 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f258e3-c74f-476a-a368-7af467976e2c-operator-scripts\") pod \"keystone-db-create-92rvd\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.686398 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtbxz\" (UniqueName: \"kubernetes.io/projected/52f258e3-c74f-476a-a368-7af467976e2c-kube-api-access-vtbxz\") pod \"keystone-db-create-92rvd\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.700418 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/root-account-create-update-mrpbv" podStartSLOduration=1.700393267 
podStartE2EDuration="1.700393267s" podCreationTimestamp="2026-02-19 08:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:19:00.698442235 +0000 UTC m=+1098.355561193" watchObservedRunningTime="2026-02-19 08:19:00.700393267 +0000 UTC m=+1098.357512215" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.787659 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtbxz\" (UniqueName: \"kubernetes.io/projected/52f258e3-c74f-476a-a368-7af467976e2c-kube-api-access-vtbxz\") pod \"keystone-db-create-92rvd\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.788079 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f258e3-c74f-476a-a368-7af467976e2c-operator-scripts\") pod \"keystone-db-create-92rvd\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.788920 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f258e3-c74f-476a-a368-7af467976e2c-operator-scripts\") pod \"keystone-db-create-92rvd\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.795109 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-e496-account-create-update-q9z6j"] Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.796157 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.799352 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.827604 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtbxz\" (UniqueName: \"kubernetes.io/projected/52f258e3-c74f-476a-a368-7af467976e2c-kube-api-access-vtbxz\") pod \"keystone-db-create-92rvd\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.862235 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-e496-account-create-update-q9z6j"] Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.889842 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38849549-c4bc-427d-8c0c-53e5d7afd2fa-operator-scripts\") pod \"keystone-e496-account-create-update-q9z6j\" (UID: \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.889912 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krvsk\" (UniqueName: \"kubernetes.io/projected/38849549-c4bc-427d-8c0c-53e5d7afd2fa-kube-api-access-krvsk\") pod \"keystone-e496-account-create-update-q9z6j\" (UID: \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.991162 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/38849549-c4bc-427d-8c0c-53e5d7afd2fa-operator-scripts\") pod \"keystone-e496-account-create-update-q9z6j\" (UID: \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.991250 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krvsk\" (UniqueName: \"kubernetes.io/projected/38849549-c4bc-427d-8c0c-53e5d7afd2fa-kube-api-access-krvsk\") pod \"keystone-e496-account-create-update-q9z6j\" (UID: \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:00 crc kubenswrapper[5023]: I0219 08:19:00.992098 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38849549-c4bc-427d-8c0c-53e5d7afd2fa-operator-scripts\") pod \"keystone-e496-account-create-update-q9z6j\" (UID: \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.018304 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krvsk\" (UniqueName: \"kubernetes.io/projected/38849549-c4bc-427d-8c0c-53e5d7afd2fa-kube-api-access-krvsk\") pod \"keystone-e496-account-create-update-q9z6j\" (UID: \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.055175 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.111980 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.318632 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-92rvd"] Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.665414 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-e496-account-create-update-q9z6j"] Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.672315 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-92rvd" event={"ID":"52f258e3-c74f-476a-a368-7af467976e2c","Type":"ContainerStarted","Data":"995a0c867704f200b461eb259cd0ebafddaab6254ec87f7565f8111e0f8d3427"} Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.672357 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-92rvd" event={"ID":"52f258e3-c74f-476a-a368-7af467976e2c","Type":"ContainerStarted","Data":"4c807f68749fccb3779a8016ff7d5373ff38e21cc662d0820dc0311508879e4e"} Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.674584 5023 generic.go:334] "Generic (PLEG): container finished" podID="58ffbb79-fdf2-40d0-9f7b-09b5d8441476" containerID="751621a94a5f21e1ed5844eb84af541e4024be153dbed9d516e75c068c368299" exitCode=0 Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.674711 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-mrpbv" event={"ID":"58ffbb79-fdf2-40d0-9f7b-09b5d8441476","Type":"ContainerDied","Data":"751621a94a5f21e1ed5844eb84af541e4024be153dbed9d516e75c068c368299"} Feb 19 08:19:01 crc kubenswrapper[5023]: I0219 08:19:01.686998 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-create-92rvd" podStartSLOduration=1.6869813809999998 podStartE2EDuration="1.686981381s" podCreationTimestamp="2026-02-19 08:19:00 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:19:01.686976531 +0000 UTC m=+1099.344095479" watchObservedRunningTime="2026-02-19 08:19:01.686981381 +0000 UTC m=+1099.344100329" Feb 19 08:19:02 crc kubenswrapper[5023]: I0219 08:19:02.684076 5023 generic.go:334] "Generic (PLEG): container finished" podID="38849549-c4bc-427d-8c0c-53e5d7afd2fa" containerID="e87eda9655712a805c36ed04260e899b3bd65e0b65856cdc07bcd00258ee76ff" exitCode=0 Feb 19 08:19:02 crc kubenswrapper[5023]: I0219 08:19:02.684158 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" event={"ID":"38849549-c4bc-427d-8c0c-53e5d7afd2fa","Type":"ContainerDied","Data":"e87eda9655712a805c36ed04260e899b3bd65e0b65856cdc07bcd00258ee76ff"} Feb 19 08:19:02 crc kubenswrapper[5023]: I0219 08:19:02.684214 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" event={"ID":"38849549-c4bc-427d-8c0c-53e5d7afd2fa","Type":"ContainerStarted","Data":"921a67ef3fe553f910b9891d94bbe363a20b71bd4e732681aa20f241bed20d5d"} Feb 19 08:19:02 crc kubenswrapper[5023]: I0219 08:19:02.686431 5023 generic.go:334] "Generic (PLEG): container finished" podID="52f258e3-c74f-476a-a368-7af467976e2c" containerID="995a0c867704f200b461eb259cd0ebafddaab6254ec87f7565f8111e0f8d3427" exitCode=0 Feb 19 08:19:02 crc kubenswrapper[5023]: I0219 08:19:02.686504 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-92rvd" event={"ID":"52f258e3-c74f-476a-a368-7af467976e2c","Type":"ContainerDied","Data":"995a0c867704f200b461eb259cd0ebafddaab6254ec87f7565f8111e0f8d3427"} Feb 19 08:19:02 crc kubenswrapper[5023]: I0219 08:19:02.688264 5023 generic.go:334] "Generic (PLEG): container finished" podID="7b0233d3-76a4-4e22-b584-b5ccdc1d82cc" 
containerID="b65cba75887ce3d7d310cc8f7e8faa039d4c2c30227a2bb6bd99126469571f60" exitCode=0 Feb 19 08:19:02 crc kubenswrapper[5023]: I0219 08:19:02.688319 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc","Type":"ContainerDied","Data":"b65cba75887ce3d7d310cc8f7e8faa039d4c2c30227a2bb6bd99126469571f60"} Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.136705 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.329871 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksgwh\" (UniqueName: \"kubernetes.io/projected/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-kube-api-access-ksgwh\") pod \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.330051 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-operator-scripts\") pod \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\" (UID: \"58ffbb79-fdf2-40d0-9f7b-09b5d8441476\") " Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.330576 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58ffbb79-fdf2-40d0-9f7b-09b5d8441476" (UID: "58ffbb79-fdf2-40d0-9f7b-09b5d8441476"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.334078 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-kube-api-access-ksgwh" (OuterVolumeSpecName: "kube-api-access-ksgwh") pod "58ffbb79-fdf2-40d0-9f7b-09b5d8441476" (UID: "58ffbb79-fdf2-40d0-9f7b-09b5d8441476"). InnerVolumeSpecName "kube-api-access-ksgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.431959 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksgwh\" (UniqueName: \"kubernetes.io/projected/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-kube-api-access-ksgwh\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.432032 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ffbb79-fdf2-40d0-9f7b-09b5d8441476-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.707068 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc","Type":"ContainerStarted","Data":"86c84043b34c248d93f4c3f1743d402f0af7aea8a6ebe570d3c8d3424f3b337d"} Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.709194 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-mrpbv" event={"ID":"58ffbb79-fdf2-40d0-9f7b-09b5d8441476","Type":"ContainerDied","Data":"5996751ff939fb878fb5709634098c0ccfc389fe7e6302e6dab9df0847aea087"} Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 08:19:03.709230 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5996751ff939fb878fb5709634098c0ccfc389fe7e6302e6dab9df0847aea087" Feb 19 08:19:03 crc kubenswrapper[5023]: I0219 
08:19:03.709302 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-mrpbv" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.099263 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.102411 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.242032 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtbxz\" (UniqueName: \"kubernetes.io/projected/52f258e3-c74f-476a-a368-7af467976e2c-kube-api-access-vtbxz\") pod \"52f258e3-c74f-476a-a368-7af467976e2c\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.242112 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krvsk\" (UniqueName: \"kubernetes.io/projected/38849549-c4bc-427d-8c0c-53e5d7afd2fa-kube-api-access-krvsk\") pod \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\" (UID: \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.242281 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f258e3-c74f-476a-a368-7af467976e2c-operator-scripts\") pod \"52f258e3-c74f-476a-a368-7af467976e2c\" (UID: \"52f258e3-c74f-476a-a368-7af467976e2c\") " Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.242421 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38849549-c4bc-427d-8c0c-53e5d7afd2fa-operator-scripts\") pod \"38849549-c4bc-427d-8c0c-53e5d7afd2fa\" (UID: 
\"38849549-c4bc-427d-8c0c-53e5d7afd2fa\") " Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.243357 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38849549-c4bc-427d-8c0c-53e5d7afd2fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38849549-c4bc-427d-8c0c-53e5d7afd2fa" (UID: "38849549-c4bc-427d-8c0c-53e5d7afd2fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.243363 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f258e3-c74f-476a-a368-7af467976e2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52f258e3-c74f-476a-a368-7af467976e2c" (UID: "52f258e3-c74f-476a-a368-7af467976e2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.243713 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f258e3-c74f-476a-a368-7af467976e2c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.243735 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38849549-c4bc-427d-8c0c-53e5d7afd2fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.247135 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38849549-c4bc-427d-8c0c-53e5d7afd2fa-kube-api-access-krvsk" (OuterVolumeSpecName: "kube-api-access-krvsk") pod "38849549-c4bc-427d-8c0c-53e5d7afd2fa" (UID: "38849549-c4bc-427d-8c0c-53e5d7afd2fa"). InnerVolumeSpecName "kube-api-access-krvsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.247471 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f258e3-c74f-476a-a368-7af467976e2c-kube-api-access-vtbxz" (OuterVolumeSpecName: "kube-api-access-vtbxz") pod "52f258e3-c74f-476a-a368-7af467976e2c" (UID: "52f258e3-c74f-476a-a368-7af467976e2c"). InnerVolumeSpecName "kube-api-access-vtbxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.344929 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtbxz\" (UniqueName: \"kubernetes.io/projected/52f258e3-c74f-476a-a368-7af467976e2c-kube-api-access-vtbxz\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.344964 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krvsk\" (UniqueName: \"kubernetes.io/projected/38849549-c4bc-427d-8c0c-53e5d7afd2fa-kube-api-access-krvsk\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.722515 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-92rvd" event={"ID":"52f258e3-c74f-476a-a368-7af467976e2c","Type":"ContainerDied","Data":"4c807f68749fccb3779a8016ff7d5373ff38e21cc662d0820dc0311508879e4e"} Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.722563 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c807f68749fccb3779a8016ff7d5373ff38e21cc662d0820dc0311508879e4e" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.722534 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-92rvd" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.724601 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" event={"ID":"38849549-c4bc-427d-8c0c-53e5d7afd2fa","Type":"ContainerDied","Data":"921a67ef3fe553f910b9891d94bbe363a20b71bd4e732681aa20f241bed20d5d"} Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.724635 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921a67ef3fe553f910b9891d94bbe363a20b71bd4e732681aa20f241bed20d5d" Feb 19 08:19:04 crc kubenswrapper[5023]: I0219 08:19:04.724766 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-e496-account-create-update-q9z6j" Feb 19 08:19:05 crc kubenswrapper[5023]: I0219 08:19:05.736258 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc","Type":"ContainerStarted","Data":"80fea2bbede1ae1ea0a8e576ff26e4df7bc10ffe078c7ecbe4a50fe675a30cf4"} Feb 19 08:19:05 crc kubenswrapper[5023]: I0219 08:19:05.736695 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"7b0233d3-76a4-4e22-b584-b5ccdc1d82cc","Type":"ContainerStarted","Data":"e3a6e920ccb8ee90516fda9dfaea487982e081a19f5d51dee236d64080cbea3b"} Feb 19 08:19:05 crc kubenswrapper[5023]: I0219 08:19:05.782379 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=13.78235467 podStartE2EDuration="13.78235467s" podCreationTimestamp="2026-02-19 08:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:19:05.771863432 +0000 UTC m=+1103.428982380" 
watchObservedRunningTime="2026-02-19 08:19:05.78235467 +0000 UTC m=+1103.439473628" Feb 19 08:19:08 crc kubenswrapper[5023]: I0219 08:19:08.349872 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:19:08 crc kubenswrapper[5023]: I0219 08:19:08.349935 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:19:08 crc kubenswrapper[5023]: I0219 08:19:08.365161 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:19:08 crc kubenswrapper[5023]: I0219 08:19:08.761924 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.165831 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.547906 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.765694 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-98gh5"] Feb 19 08:19:09 crc kubenswrapper[5023]: E0219 08:19:09.765992 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ffbb79-fdf2-40d0-9f7b-09b5d8441476" containerName="mariadb-account-create-update" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.766008 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ffbb79-fdf2-40d0-9f7b-09b5d8441476" containerName="mariadb-account-create-update" Feb 19 08:19:09 crc kubenswrapper[5023]: E0219 08:19:09.766035 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f258e3-c74f-476a-a368-7af467976e2c" 
containerName="mariadb-database-create" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.766041 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f258e3-c74f-476a-a368-7af467976e2c" containerName="mariadb-database-create" Feb 19 08:19:09 crc kubenswrapper[5023]: E0219 08:19:09.766071 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38849549-c4bc-427d-8c0c-53e5d7afd2fa" containerName="mariadb-account-create-update" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.766078 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="38849549-c4bc-427d-8c0c-53e5d7afd2fa" containerName="mariadb-account-create-update" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.766208 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="38849549-c4bc-427d-8c0c-53e5d7afd2fa" containerName="mariadb-account-create-update" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.766225 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f258e3-c74f-476a-a368-7af467976e2c" containerName="mariadb-database-create" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.766232 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ffbb79-fdf2-40d0-9f7b-09b5d8441476" containerName="mariadb-account-create-update" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.766775 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.782573 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.783288 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.783491 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.783753 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-9xvq4" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.789410 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-98gh5"] Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.865240 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-config-data\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.865323 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvlt7\" (UniqueName: \"kubernetes.io/projected/c9007a92-1ba7-475f-a227-a36537264ead-kube-api-access-pvlt7\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.865396 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-combined-ca-bundle\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.966850 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvlt7\" (UniqueName: \"kubernetes.io/projected/c9007a92-1ba7-475f-a227-a36537264ead-kube-api-access-pvlt7\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.966915 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-combined-ca-bundle\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.967000 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-config-data\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.972849 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-config-data\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.978640 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-combined-ca-bundle\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:09 crc kubenswrapper[5023]: I0219 08:19:09.982911 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvlt7\" (UniqueName: \"kubernetes.io/projected/c9007a92-1ba7-475f-a227-a36537264ead-kube-api-access-pvlt7\") pod \"keystone-db-sync-98gh5\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:10 crc kubenswrapper[5023]: I0219 08:19:10.091665 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:10 crc kubenswrapper[5023]: I0219 08:19:10.586431 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-98gh5"] Feb 19 08:19:10 crc kubenswrapper[5023]: W0219 08:19:10.597293 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9007a92_1ba7_475f_a227_a36537264ead.slice/crio-fd787b79ad44ddb240f4b8891d833377412d3148040bb74d0bea61cc46f30801 WatchSource:0}: Error finding container fd787b79ad44ddb240f4b8891d833377412d3148040bb74d0bea61cc46f30801: Status 404 returned error can't find the container with id fd787b79ad44ddb240f4b8891d833377412d3148040bb74d0bea61cc46f30801 Feb 19 08:19:10 crc kubenswrapper[5023]: I0219 08:19:10.774061 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-98gh5" event={"ID":"c9007a92-1ba7-475f-a227-a36537264ead","Type":"ContainerStarted","Data":"fd787b79ad44ddb240f4b8891d833377412d3148040bb74d0bea61cc46f30801"} Feb 19 08:19:19 crc kubenswrapper[5023]: I0219 08:19:19.849773 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-98gh5" 
event={"ID":"c9007a92-1ba7-475f-a227-a36537264ead","Type":"ContainerStarted","Data":"3349f3266e81227735ec32860d68cfdbcd84a69b2152c231b841d1d6fe3eadbf"} Feb 19 08:19:19 crc kubenswrapper[5023]: I0219 08:19:19.889144 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-98gh5" podStartSLOduration=2.728370993 podStartE2EDuration="10.889121955s" podCreationTimestamp="2026-02-19 08:19:09 +0000 UTC" firstStartedPulling="2026-02-19 08:19:10.599447678 +0000 UTC m=+1108.256566626" lastFinishedPulling="2026-02-19 08:19:18.76019864 +0000 UTC m=+1116.417317588" observedRunningTime="2026-02-19 08:19:19.878998667 +0000 UTC m=+1117.536117625" watchObservedRunningTime="2026-02-19 08:19:19.889121955 +0000 UTC m=+1117.546240903" Feb 19 08:19:22 crc kubenswrapper[5023]: I0219 08:19:22.888925 5023 generic.go:334] "Generic (PLEG): container finished" podID="c9007a92-1ba7-475f-a227-a36537264ead" containerID="3349f3266e81227735ec32860d68cfdbcd84a69b2152c231b841d1d6fe3eadbf" exitCode=0 Feb 19 08:19:22 crc kubenswrapper[5023]: I0219 08:19:22.889182 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-98gh5" event={"ID":"c9007a92-1ba7-475f-a227-a36537264ead","Type":"ContainerDied","Data":"3349f3266e81227735ec32860d68cfdbcd84a69b2152c231b841d1d6fe3eadbf"} Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.219459 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.311816 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-combined-ca-bundle\") pod \"c9007a92-1ba7-475f-a227-a36537264ead\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.311946 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvlt7\" (UniqueName: \"kubernetes.io/projected/c9007a92-1ba7-475f-a227-a36537264ead-kube-api-access-pvlt7\") pod \"c9007a92-1ba7-475f-a227-a36537264ead\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.312022 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-config-data\") pod \"c9007a92-1ba7-475f-a227-a36537264ead\" (UID: \"c9007a92-1ba7-475f-a227-a36537264ead\") " Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.318727 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9007a92-1ba7-475f-a227-a36537264ead-kube-api-access-pvlt7" (OuterVolumeSpecName: "kube-api-access-pvlt7") pod "c9007a92-1ba7-475f-a227-a36537264ead" (UID: "c9007a92-1ba7-475f-a227-a36537264ead"). InnerVolumeSpecName "kube-api-access-pvlt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.338948 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9007a92-1ba7-475f-a227-a36537264ead" (UID: "c9007a92-1ba7-475f-a227-a36537264ead"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.354027 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-config-data" (OuterVolumeSpecName: "config-data") pod "c9007a92-1ba7-475f-a227-a36537264ead" (UID: "c9007a92-1ba7-475f-a227-a36537264ead"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.413829 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.413869 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvlt7\" (UniqueName: \"kubernetes.io/projected/c9007a92-1ba7-475f-a227-a36537264ead-kube-api-access-pvlt7\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.413881 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9007a92-1ba7-475f-a227-a36537264ead-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.905700 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-98gh5" event={"ID":"c9007a92-1ba7-475f-a227-a36537264ead","Type":"ContainerDied","Data":"fd787b79ad44ddb240f4b8891d833377412d3148040bb74d0bea61cc46f30801"} Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.906071 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd787b79ad44ddb240f4b8891d833377412d3148040bb74d0bea61cc46f30801" Feb 19 08:19:24 crc kubenswrapper[5023]: I0219 08:19:24.905856 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-98gh5" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.105596 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-dj7j7"] Feb 19 08:19:25 crc kubenswrapper[5023]: E0219 08:19:25.113498 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9007a92-1ba7-475f-a227-a36537264ead" containerName="keystone-db-sync" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.113523 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9007a92-1ba7-475f-a227-a36537264ead" containerName="keystone-db-sync" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.113701 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9007a92-1ba7-475f-a227-a36537264ead" containerName="keystone-db-sync" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.114266 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.116893 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.117103 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.117495 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-9xvq4" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.117796 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.117834 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.125054 5023 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-dj7j7"] Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.225661 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcqc\" (UniqueName: \"kubernetes.io/projected/21f85e58-79cf-4eb4-b071-8ee23de02c18-kube-api-access-fqcqc\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.225771 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-scripts\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.225818 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-fernet-keys\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.225836 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-config-data\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.225860 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-credential-keys\") pod 
\"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.225883 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-combined-ca-bundle\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.271065 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.273035 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.274780 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.275196 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.280659 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.327764 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-combined-ca-bundle\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.327826 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdrc2\" (UniqueName: 
\"kubernetes.io/projected/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-kube-api-access-wdrc2\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.327852 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-log-httpd\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.327895 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.327937 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-fernet-keys\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.327952 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.327979 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqcqc\" (UniqueName: 
\"kubernetes.io/projected/21f85e58-79cf-4eb4-b071-8ee23de02c18-kube-api-access-fqcqc\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.328001 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-scripts\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.328029 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-run-httpd\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.328047 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-config-data\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.328069 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-scripts\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.328097 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-config-data\") pod 
\"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.328115 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-credential-keys\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.332783 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-scripts\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.333037 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-credential-keys\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.333202 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-combined-ca-bundle\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.333672 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-config-data\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " 
pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.335381 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-fernet-keys\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.349721 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqcqc\" (UniqueName: \"kubernetes.io/projected/21f85e58-79cf-4eb4-b071-8ee23de02c18-kube-api-access-fqcqc\") pod \"keystone-bootstrap-dj7j7\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.430170 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.430260 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-scripts\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.430306 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-run-httpd\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.430330 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-config-data\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.430392 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdrc2\" (UniqueName: \"kubernetes.io/projected/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-kube-api-access-wdrc2\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.430417 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-log-httpd\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.430467 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.431160 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-run-httpd\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.431245 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-log-httpd\") pod 
\"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.434210 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.436477 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-config-data\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.443801 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.444603 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-scripts\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.444710 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.467517 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdrc2\" (UniqueName: \"kubernetes.io/projected/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-kube-api-access-wdrc2\") pod \"ceilometer-0\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.594118 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:25 crc kubenswrapper[5023]: I0219 08:19:25.964653 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-dj7j7"] Feb 19 08:19:26 crc kubenswrapper[5023]: W0219 08:19:26.251132 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce84b34f_bbc3_4d96_b0c5_2ac0b0781d40.slice/crio-21128869ec20c2af27d81d0cf20c818891b5940e84c7ca1df6035a45a664ceaf WatchSource:0}: Error finding container 21128869ec20c2af27d81d0cf20c818891b5940e84c7ca1df6035a45a664ceaf: Status 404 returned error can't find the container with id 21128869ec20c2af27d81d0cf20c818891b5940e84c7ca1df6035a45a664ceaf Feb 19 08:19:26 crc kubenswrapper[5023]: I0219 08:19:26.252363 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:26 crc kubenswrapper[5023]: I0219 08:19:26.937410 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" event={"ID":"21f85e58-79cf-4eb4-b071-8ee23de02c18","Type":"ContainerStarted","Data":"75cb2f68365f7788ce99693e560fb0734677d2d45ef58b955c48f6366a6dd46b"} Feb 19 08:19:26 crc kubenswrapper[5023]: I0219 08:19:26.937737 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" 
event={"ID":"21f85e58-79cf-4eb4-b071-8ee23de02c18","Type":"ContainerStarted","Data":"8a2b2de399c3131623041615704b2c47a9a7a29dd3aa0792cf9417c64215d6f5"} Feb 19 08:19:26 crc kubenswrapper[5023]: I0219 08:19:26.940118 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerStarted","Data":"21128869ec20c2af27d81d0cf20c818891b5940e84c7ca1df6035a45a664ceaf"} Feb 19 08:19:26 crc kubenswrapper[5023]: I0219 08:19:26.959961 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" podStartSLOduration=1.959933975 podStartE2EDuration="1.959933975s" podCreationTimestamp="2026-02-19 08:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:19:26.955806175 +0000 UTC m=+1124.612925123" watchObservedRunningTime="2026-02-19 08:19:26.959933975 +0000 UTC m=+1124.617052923" Feb 19 08:19:27 crc kubenswrapper[5023]: I0219 08:19:27.392919 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:29 crc kubenswrapper[5023]: I0219 08:19:29.968350 5023 generic.go:334] "Generic (PLEG): container finished" podID="21f85e58-79cf-4eb4-b071-8ee23de02c18" containerID="75cb2f68365f7788ce99693e560fb0734677d2d45ef58b955c48f6366a6dd46b" exitCode=0 Feb 19 08:19:29 crc kubenswrapper[5023]: I0219 08:19:29.968437 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" event={"ID":"21f85e58-79cf-4eb4-b071-8ee23de02c18","Type":"ContainerDied","Data":"75cb2f68365f7788ce99693e560fb0734677d2d45ef58b955c48f6366a6dd46b"} Feb 19 08:19:30 crc kubenswrapper[5023]: I0219 08:19:30.979250 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerStarted","Data":"16b12e41c566bcd72a08a3d5e9df6b1388349d767ddaf06e18db4ae3ac747014"} Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.380678 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.447934 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-fernet-keys\") pod \"21f85e58-79cf-4eb4-b071-8ee23de02c18\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.448026 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-config-data\") pod \"21f85e58-79cf-4eb4-b071-8ee23de02c18\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.448046 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-credential-keys\") pod \"21f85e58-79cf-4eb4-b071-8ee23de02c18\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.448105 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-scripts\") pod \"21f85e58-79cf-4eb4-b071-8ee23de02c18\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.448173 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-combined-ca-bundle\") pod 
\"21f85e58-79cf-4eb4-b071-8ee23de02c18\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.448197 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqcqc\" (UniqueName: \"kubernetes.io/projected/21f85e58-79cf-4eb4-b071-8ee23de02c18-kube-api-access-fqcqc\") pod \"21f85e58-79cf-4eb4-b071-8ee23de02c18\" (UID: \"21f85e58-79cf-4eb4-b071-8ee23de02c18\") " Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.456779 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-scripts" (OuterVolumeSpecName: "scripts") pod "21f85e58-79cf-4eb4-b071-8ee23de02c18" (UID: "21f85e58-79cf-4eb4-b071-8ee23de02c18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.457212 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "21f85e58-79cf-4eb4-b071-8ee23de02c18" (UID: "21f85e58-79cf-4eb4-b071-8ee23de02c18"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.457382 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "21f85e58-79cf-4eb4-b071-8ee23de02c18" (UID: "21f85e58-79cf-4eb4-b071-8ee23de02c18"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.468606 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f85e58-79cf-4eb4-b071-8ee23de02c18-kube-api-access-fqcqc" (OuterVolumeSpecName: "kube-api-access-fqcqc") pod "21f85e58-79cf-4eb4-b071-8ee23de02c18" (UID: "21f85e58-79cf-4eb4-b071-8ee23de02c18"). InnerVolumeSpecName "kube-api-access-fqcqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.487362 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-config-data" (OuterVolumeSpecName: "config-data") pod "21f85e58-79cf-4eb4-b071-8ee23de02c18" (UID: "21f85e58-79cf-4eb4-b071-8ee23de02c18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.493796 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21f85e58-79cf-4eb4-b071-8ee23de02c18" (UID: "21f85e58-79cf-4eb4-b071-8ee23de02c18"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.549476 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.549503 5023 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.549512 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.549520 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.549529 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqcqc\" (UniqueName: \"kubernetes.io/projected/21f85e58-79cf-4eb4-b071-8ee23de02c18-kube-api-access-fqcqc\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.549538 5023 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/21f85e58-79cf-4eb4-b071-8ee23de02c18-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.987606 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerStarted","Data":"e799ebb77b28389ff2f7fea145fb9e80569cd8063e2292c5801bd40f831d6bc9"} Feb 19 08:19:31 crc kubenswrapper[5023]: 
I0219 08:19:31.989687 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" event={"ID":"21f85e58-79cf-4eb4-b071-8ee23de02c18","Type":"ContainerDied","Data":"8a2b2de399c3131623041615704b2c47a9a7a29dd3aa0792cf9417c64215d6f5"} Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.989728 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a2b2de399c3131623041615704b2c47a9a7a29dd3aa0792cf9417c64215d6f5" Feb 19 08:19:31 crc kubenswrapper[5023]: I0219 08:19:31.989729 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-dj7j7" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.057760 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-dj7j7"] Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.082330 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-dj7j7"] Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.205593 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-gxdj8"] Feb 19 08:19:32 crc kubenswrapper[5023]: E0219 08:19:32.205934 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f85e58-79cf-4eb4-b071-8ee23de02c18" containerName="keystone-bootstrap" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.205949 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f85e58-79cf-4eb4-b071-8ee23de02c18" containerName="keystone-bootstrap" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.206086 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f85e58-79cf-4eb4-b071-8ee23de02c18" containerName="keystone-bootstrap" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.206667 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.210199 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.210848 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.210856 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.211239 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-9xvq4" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.212006 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.244183 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-gxdj8"] Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.260753 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-scripts\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.260815 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-credential-keys\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.260892 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-fernet-keys\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.260929 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-combined-ca-bundle\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.261079 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp24k\" (UniqueName: \"kubernetes.io/projected/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-kube-api-access-dp24k\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.261218 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-config-data\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.362806 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-config-data\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc 
kubenswrapper[5023]: I0219 08:19:32.362958 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-scripts\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.362982 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-credential-keys\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.363007 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-fernet-keys\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.363043 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-combined-ca-bundle\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.363766 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp24k\" (UniqueName: \"kubernetes.io/projected/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-kube-api-access-dp24k\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.372446 
5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-fernet-keys\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.376029 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-credential-keys\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.376297 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-scripts\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.378396 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-config-data\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.379046 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-combined-ca-bundle\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.388130 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp24k\" 
(UniqueName: \"kubernetes.io/projected/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-kube-api-access-dp24k\") pod \"keystone-bootstrap-gxdj8\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.522775 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:32 crc kubenswrapper[5023]: I0219 08:19:32.993514 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-gxdj8"] Feb 19 08:19:33 crc kubenswrapper[5023]: W0219 08:19:33.009309 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72e083b8_6cdf_4a4a_9bb6_e7f20b6d5ffb.slice/crio-9ef9df6fac4297895ccbec95a014c9c54a92792e6601ffc29f5d14b0aa8ffaad WatchSource:0}: Error finding container 9ef9df6fac4297895ccbec95a014c9c54a92792e6601ffc29f5d14b0aa8ffaad: Status 404 returned error can't find the container with id 9ef9df6fac4297895ccbec95a014c9c54a92792e6601ffc29f5d14b0aa8ffaad Feb 19 08:19:33 crc kubenswrapper[5023]: I0219 08:19:33.495536 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f85e58-79cf-4eb4-b071-8ee23de02c18" path="/var/lib/kubelet/pods/21f85e58-79cf-4eb4-b071-8ee23de02c18/volumes" Feb 19 08:19:34 crc kubenswrapper[5023]: I0219 08:19:34.016637 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" event={"ID":"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb","Type":"ContainerStarted","Data":"e2090a2fe9cd695f306d0ca2b7f4ac7fcb7f16d4927f7809a6eb669cde1890ea"} Feb 19 08:19:34 crc kubenswrapper[5023]: I0219 08:19:34.017045 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" 
event={"ID":"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb","Type":"ContainerStarted","Data":"9ef9df6fac4297895ccbec95a014c9c54a92792e6601ffc29f5d14b0aa8ffaad"} Feb 19 08:19:34 crc kubenswrapper[5023]: I0219 08:19:34.048229 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" podStartSLOduration=2.048204989 podStartE2EDuration="2.048204989s" podCreationTimestamp="2026-02-19 08:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:19:34.041917613 +0000 UTC m=+1131.699036561" watchObservedRunningTime="2026-02-19 08:19:34.048204989 +0000 UTC m=+1131.705323937" Feb 19 08:19:36 crc kubenswrapper[5023]: I0219 08:19:36.034114 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerStarted","Data":"e04774edbb146842abbbe983e35598a9c1b2a0dc345a4a60998e95af7fe94eaf"} Feb 19 08:19:37 crc kubenswrapper[5023]: I0219 08:19:37.043484 5023 generic.go:334] "Generic (PLEG): container finished" podID="72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" containerID="e2090a2fe9cd695f306d0ca2b7f4ac7fcb7f16d4927f7809a6eb669cde1890ea" exitCode=0 Feb 19 08:19:37 crc kubenswrapper[5023]: I0219 08:19:37.043541 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" event={"ID":"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb","Type":"ContainerDied","Data":"e2090a2fe9cd695f306d0ca2b7f4ac7fcb7f16d4927f7809a6eb669cde1890ea"} Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.371651 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.497511 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp24k\" (UniqueName: \"kubernetes.io/projected/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-kube-api-access-dp24k\") pod \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.498087 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-config-data\") pod \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.498234 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-credential-keys\") pod \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.498314 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-fernet-keys\") pod \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.498332 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-scripts\") pod \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.498356 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-combined-ca-bundle\") pod \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\" (UID: \"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb\") " Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.503630 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" (UID: "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.503679 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-kube-api-access-dp24k" (OuterVolumeSpecName: "kube-api-access-dp24k") pod "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" (UID: "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb"). InnerVolumeSpecName "kube-api-access-dp24k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.508716 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-scripts" (OuterVolumeSpecName: "scripts") pod "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" (UID: "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.508795 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" (UID: "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.520958 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-config-data" (OuterVolumeSpecName: "config-data") pod "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" (UID: "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.525893 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" (UID: "72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.600341 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp24k\" (UniqueName: \"kubernetes.io/projected/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-kube-api-access-dp24k\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.600376 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.600386 5023 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.600395 5023 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 
19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.600408 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:38 crc kubenswrapper[5023]: I0219 08:19:38.600417 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.064228 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" event={"ID":"72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb","Type":"ContainerDied","Data":"9ef9df6fac4297895ccbec95a014c9c54a92792e6601ffc29f5d14b0aa8ffaad"} Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.064271 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef9df6fac4297895ccbec95a014c9c54a92792e6601ffc29f5d14b0aa8ffaad" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.064294 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-gxdj8" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.249225 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-6cc7b947df-92tm2"] Feb 19 08:19:39 crc kubenswrapper[5023]: E0219 08:19:39.249646 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" containerName="keystone-bootstrap" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.249673 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" containerName="keystone-bootstrap" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.249876 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" containerName="keystone-bootstrap" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.251300 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.256952 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-9xvq4" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.257209 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.257342 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.257450 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.257992 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 
08:19:39.259917 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.270850 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-6cc7b947df-92tm2"] Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.315151 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-credential-keys\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.315228 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-fernet-keys\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.315248 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5st8\" (UniqueName: \"kubernetes.io/projected/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-kube-api-access-d5st8\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.315344 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-scripts\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 
08:19:39.315466 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-config-data\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.315535 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-combined-ca-bundle\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.315591 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-internal-tls-certs\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.315649 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-public-tls-certs\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.417758 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-credential-keys\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " 
pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.417831 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-fernet-keys\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.417855 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5st8\" (UniqueName: \"kubernetes.io/projected/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-kube-api-access-d5st8\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.417906 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-scripts\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.417941 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-config-data\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.417964 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-combined-ca-bundle\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " 
pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.417993 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-internal-tls-certs\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.418013 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-public-tls-certs\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.423469 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-public-tls-certs\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.423471 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-fernet-keys\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.423512 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-credential-keys\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " 
pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.423673 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-combined-ca-bundle\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.425211 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-scripts\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.428084 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-config-data\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.429344 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-internal-tls-certs\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.441078 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5st8\" (UniqueName: \"kubernetes.io/projected/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-kube-api-access-d5st8\") pod \"keystone-6cc7b947df-92tm2\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 
08:19:39 crc kubenswrapper[5023]: I0219 08:19:39.573115 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:44 crc kubenswrapper[5023]: W0219 08:19:44.537814 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a18ac94_c6b6_40f7_bf4f_907dad15e61b.slice/crio-9ee4f80e640355fae47439b79ef71e205fca29debb168a5cef741ae1a92ed731 WatchSource:0}: Error finding container 9ee4f80e640355fae47439b79ef71e205fca29debb168a5cef741ae1a92ed731: Status 404 returned error can't find the container with id 9ee4f80e640355fae47439b79ef71e205fca29debb168a5cef741ae1a92ed731 Feb 19 08:19:44 crc kubenswrapper[5023]: I0219 08:19:44.542706 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-6cc7b947df-92tm2"] Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.120821 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerStarted","Data":"a3581caa4d6ecc5eb7fe35a01b9998d2a1a8e827d78d8a1ee39410b47cbd8a84"} Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.121149 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.121004 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="sg-core" containerID="cri-o://e04774edbb146842abbbe983e35598a9c1b2a0dc345a4a60998e95af7fe94eaf" gracePeriod=30 Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.120902 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" 
containerName="ceilometer-central-agent" containerID="cri-o://16b12e41c566bcd72a08a3d5e9df6b1388349d767ddaf06e18db4ae3ac747014" gracePeriod=30 Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.121080 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="ceilometer-notification-agent" containerID="cri-o://e799ebb77b28389ff2f7fea145fb9e80569cd8063e2292c5801bd40f831d6bc9" gracePeriod=30 Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.121066 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="proxy-httpd" containerID="cri-o://a3581caa4d6ecc5eb7fe35a01b9998d2a1a8e827d78d8a1ee39410b47cbd8a84" gracePeriod=30 Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.129504 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" event={"ID":"3a18ac94-c6b6-40f7-bf4f-907dad15e61b","Type":"ContainerStarted","Data":"5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010"} Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.129551 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" event={"ID":"3a18ac94-c6b6-40f7-bf4f-907dad15e61b","Type":"ContainerStarted","Data":"9ee4f80e640355fae47439b79ef71e205fca29debb168a5cef741ae1a92ed731"} Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.129830 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.152867 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.192403273 podStartE2EDuration="20.152842357s" podCreationTimestamp="2026-02-19 08:19:25 +0000 
UTC" firstStartedPulling="2026-02-19 08:19:26.254432506 +0000 UTC m=+1123.911551444" lastFinishedPulling="2026-02-19 08:19:44.21487158 +0000 UTC m=+1141.871990528" observedRunningTime="2026-02-19 08:19:45.143193291 +0000 UTC m=+1142.800312239" watchObservedRunningTime="2026-02-19 08:19:45.152842357 +0000 UTC m=+1142.809961315" Feb 19 08:19:45 crc kubenswrapper[5023]: I0219 08:19:45.163558 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" podStartSLOduration=6.16353599 podStartE2EDuration="6.16353599s" podCreationTimestamp="2026-02-19 08:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:19:45.162582815 +0000 UTC m=+1142.819701763" watchObservedRunningTime="2026-02-19 08:19:45.16353599 +0000 UTC m=+1142.820654938" Feb 19 08:19:46 crc kubenswrapper[5023]: I0219 08:19:46.139221 5023 generic.go:334] "Generic (PLEG): container finished" podID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerID="a3581caa4d6ecc5eb7fe35a01b9998d2a1a8e827d78d8a1ee39410b47cbd8a84" exitCode=0 Feb 19 08:19:46 crc kubenswrapper[5023]: I0219 08:19:46.139496 5023 generic.go:334] "Generic (PLEG): container finished" podID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerID="e04774edbb146842abbbe983e35598a9c1b2a0dc345a4a60998e95af7fe94eaf" exitCode=2 Feb 19 08:19:46 crc kubenswrapper[5023]: I0219 08:19:46.139509 5023 generic.go:334] "Generic (PLEG): container finished" podID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerID="16b12e41c566bcd72a08a3d5e9df6b1388349d767ddaf06e18db4ae3ac747014" exitCode=0 Feb 19 08:19:46 crc kubenswrapper[5023]: I0219 08:19:46.139301 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerDied","Data":"a3581caa4d6ecc5eb7fe35a01b9998d2a1a8e827d78d8a1ee39410b47cbd8a84"} Feb 19 08:19:46 
crc kubenswrapper[5023]: I0219 08:19:46.140225 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerDied","Data":"e04774edbb146842abbbe983e35598a9c1b2a0dc345a4a60998e95af7fe94eaf"} Feb 19 08:19:46 crc kubenswrapper[5023]: I0219 08:19:46.140236 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerDied","Data":"16b12e41c566bcd72a08a3d5e9df6b1388349d767ddaf06e18db4ae3ac747014"} Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.173819 5023 generic.go:334] "Generic (PLEG): container finished" podID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerID="e799ebb77b28389ff2f7fea145fb9e80569cd8063e2292c5801bd40f831d6bc9" exitCode=0 Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.173896 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerDied","Data":"e799ebb77b28389ff2f7fea145fb9e80569cd8063e2292c5801bd40f831d6bc9"} Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.472712 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.586986 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-config-data\") pod \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.587054 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-combined-ca-bundle\") pod \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.587177 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-log-httpd\") pod \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.587240 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-sg-core-conf-yaml\") pod \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.587274 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-scripts\") pod \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.587321 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-run-httpd\") pod \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.587384 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdrc2\" (UniqueName: \"kubernetes.io/projected/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-kube-api-access-wdrc2\") pod \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\" (UID: \"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40\") " Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.588435 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" (UID: "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.588588 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" (UID: "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.593085 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-kube-api-access-wdrc2" (OuterVolumeSpecName: "kube-api-access-wdrc2") pod "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" (UID: "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40"). InnerVolumeSpecName "kube-api-access-wdrc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.599115 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-scripts" (OuterVolumeSpecName: "scripts") pod "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" (UID: "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.618075 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" (UID: "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.656463 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" (UID: "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.690203 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.690276 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.690293 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.690307 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.690319 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdrc2\" (UniqueName: \"kubernetes.io/projected/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-kube-api-access-wdrc2\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.690331 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.692086 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-config-data" (OuterVolumeSpecName: "config-data") pod "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" (UID: "ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:49 crc kubenswrapper[5023]: I0219 08:19:49.791276 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.184395 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40","Type":"ContainerDied","Data":"21128869ec20c2af27d81d0cf20c818891b5940e84c7ca1df6035a45a664ceaf"} Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.184454 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.185888 5023 scope.go:117] "RemoveContainer" containerID="a3581caa4d6ecc5eb7fe35a01b9998d2a1a8e827d78d8a1ee39410b47cbd8a84" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.209534 5023 scope.go:117] "RemoveContainer" containerID="e04774edbb146842abbbe983e35598a9c1b2a0dc345a4a60998e95af7fe94eaf" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.228272 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.240798 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.247928 5023 scope.go:117] "RemoveContainer" containerID="e799ebb77b28389ff2f7fea145fb9e80569cd8063e2292c5801bd40f831d6bc9" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.253725 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:50 crc kubenswrapper[5023]: E0219 08:19:50.254309 5023 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="ceilometer-notification-agent" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254334 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="ceilometer-notification-agent" Feb 19 08:19:50 crc kubenswrapper[5023]: E0219 08:19:50.254392 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="ceilometer-central-agent" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254409 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="ceilometer-central-agent" Feb 19 08:19:50 crc kubenswrapper[5023]: E0219 08:19:50.254428 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="proxy-httpd" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254436 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="proxy-httpd" Feb 19 08:19:50 crc kubenswrapper[5023]: E0219 08:19:50.254476 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="sg-core" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254486 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="sg-core" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254759 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="sg-core" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254804 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="ceilometer-central-agent" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254825 5023 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="proxy-httpd" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.254844 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" containerName="ceilometer-notification-agent" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.259130 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.262809 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.264839 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.280500 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.286703 5023 scope.go:117] "RemoveContainer" containerID="16b12e41c566bcd72a08a3d5e9df6b1388349d767ddaf06e18db4ae3ac747014" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.341302 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:50 crc kubenswrapper[5023]: E0219 08:19:50.342050 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-nn4qc log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data kube-api-access-nn4qc log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="watcher-kuttl-default/ceilometer-0" podUID="fbb32914-c7eb-4bcc-ab78-227633bd0479" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.402481 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.402534 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.402590 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-log-httpd\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.402671 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-scripts\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.402692 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-run-httpd\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.402719 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-config-data\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.402762 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4qc\" (UniqueName: \"kubernetes.io/projected/fbb32914-c7eb-4bcc-ab78-227633bd0479-kube-api-access-nn4qc\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.503812 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.503866 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.503917 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-log-httpd\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.503964 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-scripts\") pod \"ceilometer-0\" (UID: 
\"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.503987 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-run-httpd\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.504198 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-config-data\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.504368 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn4qc\" (UniqueName: \"kubernetes.io/projected/fbb32914-c7eb-4bcc-ab78-227633bd0479-kube-api-access-nn4qc\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.504633 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-log-httpd\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.504693 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-run-httpd\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.509945 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.510492 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-scripts\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.511581 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-config-data\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.518460 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:50 crc kubenswrapper[5023]: I0219 08:19:50.521338 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn4qc\" (UniqueName: \"kubernetes.io/projected/fbb32914-c7eb-4bcc-ab78-227633bd0479-kube-api-access-nn4qc\") pod \"ceilometer-0\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.194432 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.212545 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.317977 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-log-httpd\") pod \"fbb32914-c7eb-4bcc-ab78-227633bd0479\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318051 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-sg-core-conf-yaml\") pod \"fbb32914-c7eb-4bcc-ab78-227633bd0479\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318081 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn4qc\" (UniqueName: \"kubernetes.io/projected/fbb32914-c7eb-4bcc-ab78-227633bd0479-kube-api-access-nn4qc\") pod \"fbb32914-c7eb-4bcc-ab78-227633bd0479\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318105 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-scripts\") pod \"fbb32914-c7eb-4bcc-ab78-227633bd0479\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318164 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-config-data\") pod \"fbb32914-c7eb-4bcc-ab78-227633bd0479\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318214 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-run-httpd\") pod \"fbb32914-c7eb-4bcc-ab78-227633bd0479\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318435 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fbb32914-c7eb-4bcc-ab78-227633bd0479" (UID: "fbb32914-c7eb-4bcc-ab78-227633bd0479"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318682 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fbb32914-c7eb-4bcc-ab78-227633bd0479" (UID: "fbb32914-c7eb-4bcc-ab78-227633bd0479"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.318722 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-combined-ca-bundle\") pod \"fbb32914-c7eb-4bcc-ab78-227633bd0479\" (UID: \"fbb32914-c7eb-4bcc-ab78-227633bd0479\") " Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.319027 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.319050 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fbb32914-c7eb-4bcc-ab78-227633bd0479-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.321753 5023 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fbb32914-c7eb-4bcc-ab78-227633bd0479" (UID: "fbb32914-c7eb-4bcc-ab78-227633bd0479"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.321783 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb32914-c7eb-4bcc-ab78-227633bd0479-kube-api-access-nn4qc" (OuterVolumeSpecName: "kube-api-access-nn4qc") pod "fbb32914-c7eb-4bcc-ab78-227633bd0479" (UID: "fbb32914-c7eb-4bcc-ab78-227633bd0479"). InnerVolumeSpecName "kube-api-access-nn4qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.322080 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbb32914-c7eb-4bcc-ab78-227633bd0479" (UID: "fbb32914-c7eb-4bcc-ab78-227633bd0479"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.322447 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-scripts" (OuterVolumeSpecName: "scripts") pod "fbb32914-c7eb-4bcc-ab78-227633bd0479" (UID: "fbb32914-c7eb-4bcc-ab78-227633bd0479"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.323496 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-config-data" (OuterVolumeSpecName: "config-data") pod "fbb32914-c7eb-4bcc-ab78-227633bd0479" (UID: "fbb32914-c7eb-4bcc-ab78-227633bd0479"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.420483 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.420517 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.420528 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn4qc\" (UniqueName: \"kubernetes.io/projected/fbb32914-c7eb-4bcc-ab78-227633bd0479-kube-api-access-nn4qc\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.420539 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.420548 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb32914-c7eb-4bcc-ab78-227633bd0479-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:19:51 crc kubenswrapper[5023]: I0219 08:19:51.488046 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40" 
path="/var/lib/kubelet/pods/ce84b34f-bbc3-4d96-b0c5-2ac0b0781d40/volumes" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.201083 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.269013 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.286721 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.305265 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.307116 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.309148 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.309906 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.315481 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.442917 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-config-data\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.442975 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-log-httpd\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.443365 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.443575 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-run-httpd\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.443684 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-scripts\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.443751 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.443801 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddh5\" (UniqueName: 
\"kubernetes.io/projected/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-kube-api-access-6ddh5\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.545232 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-config-data\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.545291 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-log-httpd\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.545395 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.546078 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-run-httpd\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.546151 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-scripts\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.546222 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.546278 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ddh5\" (UniqueName: \"kubernetes.io/projected/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-kube-api-access-6ddh5\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.546839 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-run-httpd\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.547012 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-log-httpd\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.551189 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-config-data\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.555248 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.555958 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.556034 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-scripts\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.564501 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ddh5\" (UniqueName: \"kubernetes.io/projected/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-kube-api-access-6ddh5\") pod \"ceilometer-0\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:52 crc kubenswrapper[5023]: I0219 08:19:52.622691 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:53 crc kubenswrapper[5023]: I0219 08:19:53.038329 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:19:53 crc kubenswrapper[5023]: I0219 08:19:53.221040 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerStarted","Data":"664c025d9cb3fb65aef30263fbf828a2952b6d822ab843bfe728cb1da5c74b23"} Feb 19 08:19:53 crc kubenswrapper[5023]: I0219 08:19:53.486213 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb32914-c7eb-4bcc-ab78-227633bd0479" path="/var/lib/kubelet/pods/fbb32914-c7eb-4bcc-ab78-227633bd0479/volumes" Feb 19 08:19:54 crc kubenswrapper[5023]: I0219 08:19:54.229378 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerStarted","Data":"bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a"} Feb 19 08:19:55 crc kubenswrapper[5023]: I0219 08:19:55.237064 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerStarted","Data":"d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea"} Feb 19 08:19:55 crc kubenswrapper[5023]: I0219 08:19:55.237305 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerStarted","Data":"d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386"} Feb 19 08:19:57 crc kubenswrapper[5023]: I0219 08:19:57.255881 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerStarted","Data":"6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315"} Feb 19 08:19:57 crc kubenswrapper[5023]: I0219 08:19:57.256346 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:19:57 crc kubenswrapper[5023]: I0219 08:19:57.278259 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.758551907 podStartE2EDuration="5.278243125s" podCreationTimestamp="2026-02-19 08:19:52 +0000 UTC" firstStartedPulling="2026-02-19 08:19:53.055110082 +0000 UTC m=+1150.712229030" lastFinishedPulling="2026-02-19 08:19:56.57480129 +0000 UTC m=+1154.231920248" observedRunningTime="2026-02-19 08:19:57.277339891 +0000 UTC m=+1154.934458849" watchObservedRunningTime="2026-02-19 08:19:57.278243125 +0000 UTC m=+1154.935362073" Feb 19 08:20:11 crc kubenswrapper[5023]: I0219 08:20:11.257079 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.666598 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.676257 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.749009 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.749257 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-kvjgm" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.749397 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.752380 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwp6p\" (UniqueName: \"kubernetes.io/projected/94aa582c-4929-4dcc-9de1-083027faf8b1-kube-api-access-nwp6p\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.752464 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/94aa582c-4929-4dcc-9de1-083027faf8b1-openstack-config\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.752526 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aa582c-4929-4dcc-9de1-083027faf8b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.752633 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/94aa582c-4929-4dcc-9de1-083027faf8b1-openstack-config-secret\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.765423 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.854544 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwp6p\" (UniqueName: \"kubernetes.io/projected/94aa582c-4929-4dcc-9de1-083027faf8b1-kube-api-access-nwp6p\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.854609 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/94aa582c-4929-4dcc-9de1-083027faf8b1-openstack-config\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.854662 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aa582c-4929-4dcc-9de1-083027faf8b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.854702 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/94aa582c-4929-4dcc-9de1-083027faf8b1-openstack-config-secret\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 
08:20:15.855568 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/94aa582c-4929-4dcc-9de1-083027faf8b1-openstack-config\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.862265 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/94aa582c-4929-4dcc-9de1-083027faf8b1-openstack-config-secret\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.862531 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94aa582c-4929-4dcc-9de1-083027faf8b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:15 crc kubenswrapper[5023]: I0219 08:20:15.871221 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwp6p\" (UniqueName: \"kubernetes.io/projected/94aa582c-4929-4dcc-9de1-083027faf8b1-kube-api-access-nwp6p\") pod \"openstackclient\" (UID: \"94aa582c-4929-4dcc-9de1-083027faf8b1\") " pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:16 crc kubenswrapper[5023]: I0219 08:20:16.067440 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Feb 19 08:20:16 crc kubenswrapper[5023]: I0219 08:20:16.700649 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Feb 19 08:20:17 crc kubenswrapper[5023]: I0219 08:20:17.422673 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"94aa582c-4929-4dcc-9de1-083027faf8b1","Type":"ContainerStarted","Data":"69dedcb3937d7418d7788ff49e4b3af919acec08aa5fd28a0936503da12696c2"} Feb 19 08:20:22 crc kubenswrapper[5023]: I0219 08:20:22.627537 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:25 crc kubenswrapper[5023]: I0219 08:20:25.899039 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:20:25 crc kubenswrapper[5023]: I0219 08:20:25.899516 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="bf5cf887-738e-45c4-92c8-957b9b434877" containerName="kube-state-metrics" containerID="cri-o://c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb" gracePeriod=30 Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.487951 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.516825 5023 generic.go:334] "Generic (PLEG): container finished" podID="bf5cf887-738e-45c4-92c8-957b9b434877" containerID="c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb" exitCode=2 Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.516884 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"bf5cf887-738e-45c4-92c8-957b9b434877","Type":"ContainerDied","Data":"c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb"} Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.516926 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"bf5cf887-738e-45c4-92c8-957b9b434877","Type":"ContainerDied","Data":"e261f0506c24acb1c31d1894aac117c1b41d64e86230e2792356cb0afe7b3867"} Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.516951 5023 scope.go:117] "RemoveContainer" containerID="c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.517080 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.547462 5023 scope.go:117] "RemoveContainer" containerID="c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb" Feb 19 08:20:26 crc kubenswrapper[5023]: E0219 08:20:26.550003 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb\": container with ID starting with c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb not found: ID does not exist" containerID="c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.550043 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb"} err="failed to get container status \"c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb\": rpc error: code = NotFound desc = could not find container \"c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb\": container with ID starting with c19e1b036b0636791608fd3fe012eaf74a1c5adb3308f055ce28c0bed34c47bb not found: ID does not exist" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.661370 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cvnv\" (UniqueName: \"kubernetes.io/projected/bf5cf887-738e-45c4-92c8-957b9b434877-kube-api-access-7cvnv\") pod \"bf5cf887-738e-45c4-92c8-957b9b434877\" (UID: \"bf5cf887-738e-45c4-92c8-957b9b434877\") " Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.666182 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5cf887-738e-45c4-92c8-957b9b434877-kube-api-access-7cvnv" (OuterVolumeSpecName: "kube-api-access-7cvnv") pod "bf5cf887-738e-45c4-92c8-957b9b434877" (UID: 
"bf5cf887-738e-45c4-92c8-957b9b434877"). InnerVolumeSpecName "kube-api-access-7cvnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.763889 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cvnv\" (UniqueName: \"kubernetes.io/projected/bf5cf887-738e-45c4-92c8-957b9b434877-kube-api-access-7cvnv\") on node \"crc\" DevicePath \"\"" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.844257 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.853500 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.870639 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:20:26 crc kubenswrapper[5023]: E0219 08:20:26.870992 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5cf887-738e-45c4-92c8-957b9b434877" containerName="kube-state-metrics" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.871008 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5cf887-738e-45c4-92c8-957b9b434877" containerName="kube-state-metrics" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.871181 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5cf887-738e-45c4-92c8-957b9b434877" containerName="kube-state-metrics" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.871812 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.876884 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.877058 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.880943 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.990483 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.990752 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-central-agent" containerID="cri-o://bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a" gracePeriod=30 Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.990835 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="proxy-httpd" containerID="cri-o://6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315" gracePeriod=30 Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.990834 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="sg-core" containerID="cri-o://d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea" gracePeriod=30 Feb 19 08:20:26 crc kubenswrapper[5023]: I0219 08:20:26.990900 5023 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-notification-agent" containerID="cri-o://d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386" gracePeriod=30 Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.070004 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.070053 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgwn4\" (UniqueName: \"kubernetes.io/projected/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-api-access-cgwn4\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.070288 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.070426 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.172209 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.172310 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.172357 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.172388 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgwn4\" (UniqueName: \"kubernetes.io/projected/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-api-access-cgwn4\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.177406 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.177459 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.185163 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.189407 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgwn4\" (UniqueName: \"kubernetes.io/projected/ba186aeb-8303-4be0-b6a1-ba2b8de453a5-kube-api-access-cgwn4\") pod \"kube-state-metrics-0\" (UID: \"ba186aeb-8303-4be0-b6a1-ba2b8de453a5\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.485857 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.487530 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5cf887-738e-45c4-92c8-957b9b434877" path="/var/lib/kubelet/pods/bf5cf887-738e-45c4-92c8-957b9b434877/volumes" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.527176 5023 generic.go:334] "Generic (PLEG): container finished" podID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerID="6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315" exitCode=0 Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.527214 5023 generic.go:334] "Generic (PLEG): container finished" podID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerID="d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea" exitCode=2 Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.527225 5023 generic.go:334] "Generic (PLEG): container finished" podID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerID="bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a" exitCode=0 Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.527214 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerDied","Data":"6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315"} Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.527307 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerDied","Data":"d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea"} Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.527325 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerDied","Data":"bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a"} Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.528811 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"94aa582c-4929-4dcc-9de1-083027faf8b1","Type":"ContainerStarted","Data":"e6ef961cdc83562f1a8e6f6eef85800d2d334f38c1ab94b341f42c7c63c56894"} Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.548292 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.945662874 podStartE2EDuration="12.548276703s" podCreationTimestamp="2026-02-19 08:20:15 +0000 UTC" firstStartedPulling="2026-02-19 08:20:16.707827284 +0000 UTC m=+1174.364946232" lastFinishedPulling="2026-02-19 08:20:26.310441113 +0000 UTC m=+1183.967560061" observedRunningTime="2026-02-19 08:20:27.544988316 +0000 UTC m=+1185.202107264" watchObservedRunningTime="2026-02-19 08:20:27.548276703 +0000 UTC m=+1185.205395651" Feb 19 08:20:27 crc kubenswrapper[5023]: I0219 08:20:27.940715 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 19 08:20:28 crc kubenswrapper[5023]: I0219 08:20:28.540839 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"ba186aeb-8303-4be0-b6a1-ba2b8de453a5","Type":"ContainerStarted","Data":"b4307843f0a953801356905615f5425da8d2a72163b8ff62c507ca9885246c72"} Feb 19 08:20:28 crc kubenswrapper[5023]: I0219 08:20:28.541193 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"ba186aeb-8303-4be0-b6a1-ba2b8de453a5","Type":"ContainerStarted","Data":"7166c3fe0411e4ac8a78272b5e64b830b72245ae2293913e9b02b6c5cf99fc23"} Feb 19 08:20:28 crc kubenswrapper[5023]: I0219 08:20:28.563408 5023 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.200985735 podStartE2EDuration="2.563390575s" podCreationTimestamp="2026-02-19 08:20:26 +0000 UTC" firstStartedPulling="2026-02-19 08:20:27.949534043 +0000 UTC m=+1185.606652991" lastFinishedPulling="2026-02-19 08:20:28.311938883 +0000 UTC m=+1185.969057831" observedRunningTime="2026-02-19 08:20:28.560082787 +0000 UTC m=+1186.217201735" watchObservedRunningTime="2026-02-19 08:20:28.563390575 +0000 UTC m=+1186.220509523" Feb 19 08:20:29 crc kubenswrapper[5023]: I0219 08:20:29.546878 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.307413 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.435510 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-sg-core-conf-yaml\") pod \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.435597 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-config-data\") pod \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.435668 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ddh5\" (UniqueName: \"kubernetes.io/projected/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-kube-api-access-6ddh5\") pod \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " Feb 19 08:20:30 crc 
kubenswrapper[5023]: I0219 08:20:30.435892 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-run-httpd\") pod \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.436028 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-combined-ca-bundle\") pod \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.436054 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-scripts\") pod \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.436123 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-log-httpd\") pod \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\" (UID: \"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53\") " Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.436209 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" (UID: "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.436768 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.436792 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" (UID: "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.441269 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-kube-api-access-6ddh5" (OuterVolumeSpecName: "kube-api-access-6ddh5") pod "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" (UID: "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53"). InnerVolumeSpecName "kube-api-access-6ddh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.445687 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-scripts" (OuterVolumeSpecName: "scripts") pod "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" (UID: "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.460815 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" (UID: "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.501104 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" (UID: "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.527333 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-config-data" (OuterVolumeSpecName: "config-data") pod "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" (UID: "01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.538484 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.538512 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.538532 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.538545 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.538556 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.538566 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ddh5\" (UniqueName: \"kubernetes.io/projected/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53-kube-api-access-6ddh5\") on node \"crc\" DevicePath \"\"" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.556298 5023 generic.go:334] "Generic (PLEG): container finished" podID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerID="d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386" exitCode=0 Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.556385 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.556438 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerDied","Data":"d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386"} Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.556473 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53","Type":"ContainerDied","Data":"664c025d9cb3fb65aef30263fbf828a2952b6d822ab843bfe728cb1da5c74b23"} Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.556496 5023 scope.go:117] "RemoveContainer" containerID="6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.578786 5023 scope.go:117] "RemoveContainer" containerID="d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea" Feb 19 08:20:30 
crc kubenswrapper[5023]: I0219 08:20:30.592393 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.600751 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.610541 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.610900 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-central-agent" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.610924 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-central-agent" Feb 19 08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.610941 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="proxy-httpd" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.610949 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="proxy-httpd" Feb 19 08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.610968 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="sg-core" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.610975 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="sg-core" Feb 19 08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.610984 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-notification-agent" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.610991 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-notification-agent" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.611127 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-notification-agent" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.611137 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="sg-core" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.611146 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="proxy-httpd" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.611159 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" containerName="ceilometer-central-agent" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.612155 5023 scope.go:117] "RemoveContainer" containerID="d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.612510 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.614710 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.618720 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.618915 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.650680 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.651561 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.652278 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.652612 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-config-data\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.652806 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-scripts\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.653382 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-log-httpd\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.653868 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-run-httpd\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.654009 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5bg\" (UniqueName: \"kubernetes.io/projected/023485c0-a529-4921-a1ee-69ed5651880f-kube-api-access-xh5bg\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.654872 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.659913 5023 scope.go:117] "RemoveContainer" 
containerID="bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.681131 5023 scope.go:117] "RemoveContainer" containerID="6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315" Feb 19 08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.681797 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315\": container with ID starting with 6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315 not found: ID does not exist" containerID="6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.681848 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315"} err="failed to get container status \"6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315\": rpc error: code = NotFound desc = could not find container \"6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315\": container with ID starting with 6d699d422d1fb231f69a88dfb8ccec789e903c6a504e15dc8ff8ec3eb1570315 not found: ID does not exist" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.681881 5023 scope.go:117] "RemoveContainer" containerID="d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea" Feb 19 08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.682202 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea\": container with ID starting with d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea not found: ID does not exist" containerID="d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea" Feb 19 08:20:30 crc 
kubenswrapper[5023]: I0219 08:20:30.682242 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea"} err="failed to get container status \"d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea\": rpc error: code = NotFound desc = could not find container \"d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea\": container with ID starting with d9e02e960873cb1268b12d32476be7d90114e61071b8d20707fe4c28edcfc3ea not found: ID does not exist" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.682268 5023 scope.go:117] "RemoveContainer" containerID="d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386" Feb 19 08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.682526 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386\": container with ID starting with d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386 not found: ID does not exist" containerID="d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.682557 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386"} err="failed to get container status \"d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386\": rpc error: code = NotFound desc = could not find container \"d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386\": container with ID starting with d2c6e241e1fbbeda9bf8c4a8da582e59177fd45fe4d9a495dadd8738a7f30386 not found: ID does not exist" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.682606 5023 scope.go:117] "RemoveContainer" containerID="bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a" Feb 19 
08:20:30 crc kubenswrapper[5023]: E0219 08:20:30.682832 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a\": container with ID starting with bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a not found: ID does not exist" containerID="bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.682855 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a"} err="failed to get container status \"bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a\": rpc error: code = NotFound desc = could not find container \"bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a\": container with ID starting with bed7a388c5cb6bea4adeb813ff2f074b2e946090265bfb20b2e2114dde574b0a not found: ID does not exist" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754463 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-run-httpd\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754512 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5bg\" (UniqueName: \"kubernetes.io/projected/023485c0-a529-4921-a1ee-69ed5651880f-kube-api-access-xh5bg\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754560 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754592 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754631 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754659 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-config-data\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754680 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-scripts\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754703 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-log-httpd\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.754963 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-run-httpd\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.755062 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-log-httpd\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.758797 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.759154 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.759360 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-config-data\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.760705 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-scripts\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.760771 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.772689 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh5bg\" (UniqueName: \"kubernetes.io/projected/023485c0-a529-4921-a1ee-69ed5651880f-kube-api-access-xh5bg\") pod \"ceilometer-0\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:30 crc kubenswrapper[5023]: I0219 08:20:30.958680 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:31 crc kubenswrapper[5023]: I0219 08:20:31.398911 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:20:31 crc kubenswrapper[5023]: W0219 08:20:31.403083 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod023485c0_a529_4921_a1ee_69ed5651880f.slice/crio-6593a5f0446626d83d525cbf6250d2aaa77c8bb10b49e5e2c493bf246003e0c8 WatchSource:0}: Error finding container 6593a5f0446626d83d525cbf6250d2aaa77c8bb10b49e5e2c493bf246003e0c8: Status 404 returned error can't find the container with id 6593a5f0446626d83d525cbf6250d2aaa77c8bb10b49e5e2c493bf246003e0c8 Feb 19 08:20:31 crc kubenswrapper[5023]: I0219 08:20:31.488603 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53" path="/var/lib/kubelet/pods/01c2a2ee-3bfb-4d37-85f8-fb3c3f9ecf53/volumes" Feb 19 08:20:31 crc kubenswrapper[5023]: I0219 08:20:31.575397 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerStarted","Data":"6593a5f0446626d83d525cbf6250d2aaa77c8bb10b49e5e2c493bf246003e0c8"} Feb 19 08:20:32 crc kubenswrapper[5023]: I0219 08:20:32.589741 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerStarted","Data":"3332fd7f128cf4768a543f1c5d73c7a211870a81f0dfb1c704d165e8213cf8cf"} Feb 19 08:20:33 crc kubenswrapper[5023]: I0219 08:20:33.600128 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerStarted","Data":"f826272119e8a9d4917a4598e084b2ef27eb00bb37ffa5bacdbdfbb4582da965"} Feb 19 08:20:33 crc kubenswrapper[5023]: I0219 
08:20:33.600443 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerStarted","Data":"62f41e4619c094020c3a347225080de75b48da9d56c9f2609ded151e1651460c"} Feb 19 08:20:35 crc kubenswrapper[5023]: I0219 08:20:35.624395 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerStarted","Data":"6eddad2a1559334be4624a5720ae12b9a7bb2a68d77e0b3fac7959431f5dcf9c"} Feb 19 08:20:35 crc kubenswrapper[5023]: I0219 08:20:35.624846 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:20:35 crc kubenswrapper[5023]: I0219 08:20:35.651193 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.386393057 podStartE2EDuration="5.651167523s" podCreationTimestamp="2026-02-19 08:20:30 +0000 UTC" firstStartedPulling="2026-02-19 08:20:31.410931987 +0000 UTC m=+1189.068050935" lastFinishedPulling="2026-02-19 08:20:34.675706453 +0000 UTC m=+1192.332825401" observedRunningTime="2026-02-19 08:20:35.64197951 +0000 UTC m=+1193.299098458" watchObservedRunningTime="2026-02-19 08:20:35.651167523 +0000 UTC m=+1193.308286471" Feb 19 08:20:37 crc kubenswrapper[5023]: I0219 08:20:37.494512 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 19 08:20:41 crc kubenswrapper[5023]: I0219 08:20:41.870759 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:20:41 crc kubenswrapper[5023]: I0219 08:20:41.871531 5023 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:21:00 crc kubenswrapper[5023]: I0219 08:21:00.965992 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.913067 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr"] Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.914659 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.916823 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.920457 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-xwcrt"] Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.921596 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c398db-8586-4cff-a9cf-0b61425ff87f-operator-scripts\") pod \"watcher-60f2-account-create-update-pg8xr\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.921741 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4r9\" (UniqueName: \"kubernetes.io/projected/38c398db-8586-4cff-a9cf-0b61425ff87f-kube-api-access-bj4r9\") pod 
\"watcher-60f2-account-create-update-pg8xr\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.921785 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.927750 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xwcrt"] Feb 19 08:21:05 crc kubenswrapper[5023]: I0219 08:21:05.934745 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr"] Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.023178 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4r9\" (UniqueName: \"kubernetes.io/projected/38c398db-8586-4cff-a9cf-0b61425ff87f-kube-api-access-bj4r9\") pod \"watcher-60f2-account-create-update-pg8xr\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.023253 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzctw\" (UniqueName: \"kubernetes.io/projected/069710ef-dc0f-4e31-a6e0-72bd60aaa878-kube-api-access-nzctw\") pod \"watcher-db-create-xwcrt\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.023335 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c398db-8586-4cff-a9cf-0b61425ff87f-operator-scripts\") pod \"watcher-60f2-account-create-update-pg8xr\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " 
pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.023361 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069710ef-dc0f-4e31-a6e0-72bd60aaa878-operator-scripts\") pod \"watcher-db-create-xwcrt\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.024343 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c398db-8586-4cff-a9cf-0b61425ff87f-operator-scripts\") pod \"watcher-60f2-account-create-update-pg8xr\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.047381 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4r9\" (UniqueName: \"kubernetes.io/projected/38c398db-8586-4cff-a9cf-0b61425ff87f-kube-api-access-bj4r9\") pod \"watcher-60f2-account-create-update-pg8xr\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.125538 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzctw\" (UniqueName: \"kubernetes.io/projected/069710ef-dc0f-4e31-a6e0-72bd60aaa878-kube-api-access-nzctw\") pod \"watcher-db-create-xwcrt\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.125668 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/069710ef-dc0f-4e31-a6e0-72bd60aaa878-operator-scripts\") pod \"watcher-db-create-xwcrt\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.126404 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069710ef-dc0f-4e31-a6e0-72bd60aaa878-operator-scripts\") pod \"watcher-db-create-xwcrt\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.144964 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzctw\" (UniqueName: \"kubernetes.io/projected/069710ef-dc0f-4e31-a6e0-72bd60aaa878-kube-api-access-nzctw\") pod \"watcher-db-create-xwcrt\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.233111 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.244107 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.550135 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xwcrt"] Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.723718 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr"] Feb 19 08:21:06 crc kubenswrapper[5023]: W0219 08:21:06.727143 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c398db_8586_4cff_a9cf_0b61425ff87f.slice/crio-12aa5b19ef4449b40b6ac0937fc51f293dd13d1f12c1227747741bf182ed3f52 WatchSource:0}: Error finding container 12aa5b19ef4449b40b6ac0937fc51f293dd13d1f12c1227747741bf182ed3f52: Status 404 returned error can't find the container with id 12aa5b19ef4449b40b6ac0937fc51f293dd13d1f12c1227747741bf182ed3f52 Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.895445 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" event={"ID":"38c398db-8586-4cff-a9cf-0b61425ff87f","Type":"ContainerStarted","Data":"1ca6c1993a5683d8a5908d428f4943a4dd6c76d84bfd17392c518c29d9e7c4a0"} Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.895496 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" event={"ID":"38c398db-8586-4cff-a9cf-0b61425ff87f","Type":"ContainerStarted","Data":"12aa5b19ef4449b40b6ac0937fc51f293dd13d1f12c1227747741bf182ed3f52"} Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.896589 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xwcrt" event={"ID":"069710ef-dc0f-4e31-a6e0-72bd60aaa878","Type":"ContainerStarted","Data":"aa539d8ae370b06bce71d1638a64a6d4fefc06e4f716f1c53cbb6346fe82ecfb"} Feb 19 08:21:06 crc 
kubenswrapper[5023]: I0219 08:21:06.896663 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xwcrt" event={"ID":"069710ef-dc0f-4e31-a6e0-72bd60aaa878","Type":"ContainerStarted","Data":"aa7f26bf359227a99cdff3620a3a37590bfc97b48f1063f0faf301fb2997583d"} Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.914482 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" podStartSLOduration=1.914462145 podStartE2EDuration="1.914462145s" podCreationTimestamp="2026-02-19 08:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:21:06.908716993 +0000 UTC m=+1224.565835941" watchObservedRunningTime="2026-02-19 08:21:06.914462145 +0000 UTC m=+1224.571581093" Feb 19 08:21:06 crc kubenswrapper[5023]: I0219 08:21:06.927502 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-xwcrt" podStartSLOduration=1.92747957 podStartE2EDuration="1.92747957s" podCreationTimestamp="2026-02-19 08:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:21:06.922388255 +0000 UTC m=+1224.579507203" watchObservedRunningTime="2026-02-19 08:21:06.92747957 +0000 UTC m=+1224.584598518" Feb 19 08:21:07 crc kubenswrapper[5023]: I0219 08:21:07.908460 5023 generic.go:334] "Generic (PLEG): container finished" podID="38c398db-8586-4cff-a9cf-0b61425ff87f" containerID="1ca6c1993a5683d8a5908d428f4943a4dd6c76d84bfd17392c518c29d9e7c4a0" exitCode=0 Feb 19 08:21:07 crc kubenswrapper[5023]: I0219 08:21:07.908654 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" 
event={"ID":"38c398db-8586-4cff-a9cf-0b61425ff87f","Type":"ContainerDied","Data":"1ca6c1993a5683d8a5908d428f4943a4dd6c76d84bfd17392c518c29d9e7c4a0"} Feb 19 08:21:07 crc kubenswrapper[5023]: I0219 08:21:07.911146 5023 generic.go:334] "Generic (PLEG): container finished" podID="069710ef-dc0f-4e31-a6e0-72bd60aaa878" containerID="aa539d8ae370b06bce71d1638a64a6d4fefc06e4f716f1c53cbb6346fe82ecfb" exitCode=0 Feb 19 08:21:07 crc kubenswrapper[5023]: I0219 08:21:07.911195 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xwcrt" event={"ID":"069710ef-dc0f-4e31-a6e0-72bd60aaa878","Type":"ContainerDied","Data":"aa539d8ae370b06bce71d1638a64a6d4fefc06e4f716f1c53cbb6346fe82ecfb"} Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.302922 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.309675 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.476869 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c398db-8586-4cff-a9cf-0b61425ff87f-operator-scripts\") pod \"38c398db-8586-4cff-a9cf-0b61425ff87f\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.477233 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj4r9\" (UniqueName: \"kubernetes.io/projected/38c398db-8586-4cff-a9cf-0b61425ff87f-kube-api-access-bj4r9\") pod \"38c398db-8586-4cff-a9cf-0b61425ff87f\" (UID: \"38c398db-8586-4cff-a9cf-0b61425ff87f\") " Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.477361 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzctw\" (UniqueName: \"kubernetes.io/projected/069710ef-dc0f-4e31-a6e0-72bd60aaa878-kube-api-access-nzctw\") pod \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.477485 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069710ef-dc0f-4e31-a6e0-72bd60aaa878-operator-scripts\") pod \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\" (UID: \"069710ef-dc0f-4e31-a6e0-72bd60aaa878\") " Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.477961 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c398db-8586-4cff-a9cf-0b61425ff87f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38c398db-8586-4cff-a9cf-0b61425ff87f" (UID: "38c398db-8586-4cff-a9cf-0b61425ff87f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.478305 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/069710ef-dc0f-4e31-a6e0-72bd60aaa878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "069710ef-dc0f-4e31-a6e0-72bd60aaa878" (UID: "069710ef-dc0f-4e31-a6e0-72bd60aaa878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.485871 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069710ef-dc0f-4e31-a6e0-72bd60aaa878-kube-api-access-nzctw" (OuterVolumeSpecName: "kube-api-access-nzctw") pod "069710ef-dc0f-4e31-a6e0-72bd60aaa878" (UID: "069710ef-dc0f-4e31-a6e0-72bd60aaa878"). InnerVolumeSpecName "kube-api-access-nzctw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.492832 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c398db-8586-4cff-a9cf-0b61425ff87f-kube-api-access-bj4r9" (OuterVolumeSpecName: "kube-api-access-bj4r9") pod "38c398db-8586-4cff-a9cf-0b61425ff87f" (UID: "38c398db-8586-4cff-a9cf-0b61425ff87f"). InnerVolumeSpecName "kube-api-access-bj4r9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.579833 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c398db-8586-4cff-a9cf-0b61425ff87f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.579864 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj4r9\" (UniqueName: \"kubernetes.io/projected/38c398db-8586-4cff-a9cf-0b61425ff87f-kube-api-access-bj4r9\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.579875 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzctw\" (UniqueName: \"kubernetes.io/projected/069710ef-dc0f-4e31-a6e0-72bd60aaa878-kube-api-access-nzctw\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.579887 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069710ef-dc0f-4e31-a6e0-72bd60aaa878-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.929198 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-xwcrt" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.929888 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-xwcrt" event={"ID":"069710ef-dc0f-4e31-a6e0-72bd60aaa878","Type":"ContainerDied","Data":"aa7f26bf359227a99cdff3620a3a37590bfc97b48f1063f0faf301fb2997583d"} Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.929929 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa7f26bf359227a99cdff3620a3a37590bfc97b48f1063f0faf301fb2997583d" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.931354 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" event={"ID":"38c398db-8586-4cff-a9cf-0b61425ff87f","Type":"ContainerDied","Data":"12aa5b19ef4449b40b6ac0937fc51f293dd13d1f12c1227747741bf182ed3f52"} Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.931376 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12aa5b19ef4449b40b6ac0937fc51f293dd13d1f12c1227747741bf182ed3f52" Feb 19 08:21:09 crc kubenswrapper[5023]: I0219 08:21:09.931405 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.254594 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz"] Feb 19 08:21:11 crc kubenswrapper[5023]: E0219 08:21:11.256244 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c398db-8586-4cff-a9cf-0b61425ff87f" containerName="mariadb-account-create-update" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.256323 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c398db-8586-4cff-a9cf-0b61425ff87f" containerName="mariadb-account-create-update" Feb 19 08:21:11 crc kubenswrapper[5023]: E0219 08:21:11.256392 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069710ef-dc0f-4e31-a6e0-72bd60aaa878" containerName="mariadb-database-create" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.256444 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="069710ef-dc0f-4e31-a6e0-72bd60aaa878" containerName="mariadb-database-create" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.256707 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="069710ef-dc0f-4e31-a6e0-72bd60aaa878" containerName="mariadb-database-create" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.256832 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c398db-8586-4cff-a9cf-0b61425ff87f" containerName="mariadb-account-create-update" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.257437 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.267194 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.267196 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-4l5d8" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.268163 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz"] Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.310404 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-db-sync-config-data\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.310494 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.310517 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmp6c\" (UniqueName: \"kubernetes.io/projected/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-kube-api-access-xmp6c\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.310544 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-config-data\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.412341 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.412774 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmp6c\" (UniqueName: \"kubernetes.io/projected/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-kube-api-access-xmp6c\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.413020 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-config-data\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.413307 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-db-sync-config-data\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: 
I0219 08:21:11.419188 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-db-sync-config-data\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.429212 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-config-data\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.429355 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.430325 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmp6c\" (UniqueName: \"kubernetes.io/projected/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-kube-api-access-xmp6c\") pod \"watcher-kuttl-db-sync-8pgtz\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.581238 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.872763 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:21:11 crc kubenswrapper[5023]: I0219 08:21:11.873179 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:21:12 crc kubenswrapper[5023]: I0219 08:21:12.104840 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz"] Feb 19 08:21:12 crc kubenswrapper[5023]: I0219 08:21:12.107929 5023 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:21:12 crc kubenswrapper[5023]: I0219 08:21:12.959372 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" event={"ID":"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb","Type":"ContainerStarted","Data":"cb5922a046d14c51c6159cde82f2c62c7ca5301f32ff57867c0ea5ff64211eda"} Feb 19 08:21:18 crc kubenswrapper[5023]: I0219 08:21:18.764642 5023 scope.go:117] "RemoveContainer" containerID="3bfb08e07fb12b59401e60326f73f450324558b92c36187a92af5861612e46b4" Feb 19 08:21:28 crc kubenswrapper[5023]: E0219 08:21:28.257264 5023 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.194:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Feb 19 08:21:28 crc kubenswrapper[5023]: E0219 
08:21:28.257806 5023 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.194:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Feb 19 08:21:28 crc kubenswrapper[5023]: E0219 08:21:28.257933 5023 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.194:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmp6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNo
tPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-kuttl-db-sync-8pgtz_watcher-kuttl-default(fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 08:21:28 crc kubenswrapper[5023]: E0219 08:21:28.259152 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" podUID="fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" Feb 19 08:21:29 crc kubenswrapper[5023]: E0219 08:21:29.111676 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.194:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" podUID="fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" Feb 19 08:21:41 crc kubenswrapper[5023]: I0219 08:21:41.870513 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:21:41 crc kubenswrapper[5023]: I0219 08:21:41.871361 5023 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:21:41 crc kubenswrapper[5023]: I0219 08:21:41.871415 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:21:41 crc kubenswrapper[5023]: I0219 08:21:41.872195 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"382a9da75f766d6a7fa79de0344e2f00ca61a6303d2cd1d90193c5d3204c10cf"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:21:41 crc kubenswrapper[5023]: I0219 08:21:41.872255 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://382a9da75f766d6a7fa79de0344e2f00ca61a6303d2cd1d90193c5d3204c10cf" gracePeriod=600 Feb 19 08:21:42 crc kubenswrapper[5023]: I0219 08:21:42.219222 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="382a9da75f766d6a7fa79de0344e2f00ca61a6303d2cd1d90193c5d3204c10cf" exitCode=0 Feb 19 08:21:42 crc kubenswrapper[5023]: I0219 08:21:42.219436 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"382a9da75f766d6a7fa79de0344e2f00ca61a6303d2cd1d90193c5d3204c10cf"} Feb 19 08:21:42 crc kubenswrapper[5023]: I0219 08:21:42.219817 5023 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"647f5b89cada4aacd5c7cd75ae79b817efe4579aa22dd7a81e01906e874d0fd6"} Feb 19 08:21:42 crc kubenswrapper[5023]: I0219 08:21:42.219896 5023 scope.go:117] "RemoveContainer" containerID="650edbaf66bd4a3e9e9e9ff44722cf8acdf5b9eac44eb0f6a93249eddba0373f" Feb 19 08:21:42 crc kubenswrapper[5023]: I0219 08:21:42.222792 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" event={"ID":"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb","Type":"ContainerStarted","Data":"e78831a1c8143bc0c339a21f8d922671ae004379833f838c88187841b1f12ff6"} Feb 19 08:21:42 crc kubenswrapper[5023]: I0219 08:21:42.260209 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" podStartSLOduration=1.8191661890000002 podStartE2EDuration="31.26018096s" podCreationTimestamp="2026-02-19 08:21:11 +0000 UTC" firstStartedPulling="2026-02-19 08:21:12.10768954 +0000 UTC m=+1229.764808488" lastFinishedPulling="2026-02-19 08:21:41.548704311 +0000 UTC m=+1259.205823259" observedRunningTime="2026-02-19 08:21:42.259928253 +0000 UTC m=+1259.917047201" watchObservedRunningTime="2026-02-19 08:21:42.26018096 +0000 UTC m=+1259.917299908" Feb 19 08:21:46 crc kubenswrapper[5023]: I0219 08:21:46.263659 5023 generic.go:334] "Generic (PLEG): container finished" podID="fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" containerID="e78831a1c8143bc0c339a21f8d922671ae004379833f838c88187841b1f12ff6" exitCode=0 Feb 19 08:21:46 crc kubenswrapper[5023]: I0219 08:21:46.263735 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" event={"ID":"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb","Type":"ContainerDied","Data":"e78831a1c8143bc0c339a21f8d922671ae004379833f838c88187841b1f12ff6"} Feb 19 08:21:47 crc 
kubenswrapper[5023]: I0219 08:21:47.590183 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.674936 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-db-sync-config-data\") pod \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.675219 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-combined-ca-bundle\") pod \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.675332 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-config-data\") pod \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.675447 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmp6c\" (UniqueName: \"kubernetes.io/projected/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-kube-api-access-xmp6c\") pod \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\" (UID: \"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb\") " Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.680813 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" (UID: "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.680840 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-kube-api-access-xmp6c" (OuterVolumeSpecName: "kube-api-access-xmp6c") pod "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" (UID: "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb"). InnerVolumeSpecName "kube-api-access-xmp6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.714052 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" (UID: "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.726572 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-config-data" (OuterVolumeSpecName: "config-data") pod "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" (UID: "fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.777771 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.777825 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.777846 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:47 crc kubenswrapper[5023]: I0219 08:21:47.777863 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmp6c\" (UniqueName: \"kubernetes.io/projected/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb-kube-api-access-xmp6c\") on node \"crc\" DevicePath \"\"" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.286391 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" event={"ID":"fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb","Type":"ContainerDied","Data":"cb5922a046d14c51c6159cde82f2c62c7ca5301f32ff57867c0ea5ff64211eda"} Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.286427 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.286452 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb5922a046d14c51c6159cde82f2c62c7ca5301f32ff57867c0ea5ff64211eda" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.703647 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:21:48 crc kubenswrapper[5023]: E0219 08:21:48.704054 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" containerName="watcher-kuttl-db-sync" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.704071 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" containerName="watcher-kuttl-db-sync" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.704256 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" containerName="watcher-kuttl-db-sync" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.705198 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.709603 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.710648 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.713904 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-4l5d8" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.714127 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.714300 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.729175 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.757080 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.804942 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6fv7\" (UniqueName: \"kubernetes.io/projected/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-kube-api-access-r6fv7\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805023 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805056 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805158 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805188 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805218 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fjc\" (UniqueName: \"kubernetes.io/projected/f683bb9a-6a58-4af1-840c-844530b3a067-kube-api-access-n8fjc\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805249 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805273 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f683bb9a-6a58-4af1-840c-844530b3a067-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.805307 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.859705 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.862000 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.883364 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906460 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906502 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc 
kubenswrapper[5023]: I0219 08:21:48.906526 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fjc\" (UniqueName: \"kubernetes.io/projected/f683bb9a-6a58-4af1-840c-844530b3a067-kube-api-access-n8fjc\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906549 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906563 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f683bb9a-6a58-4af1-840c-844530b3a067-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906588 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906648 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6fv7\" (UniqueName: \"kubernetes.io/projected/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-kube-api-access-r6fv7\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906681 
5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.906703 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.907045 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.910120 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f683bb9a-6a58-4af1-840c-844530b3a067-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.918108 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.918737 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: 
I0219 08:21:48.922308 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.923055 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.923430 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.924402 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.941306 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fjc\" (UniqueName: \"kubernetes.io/projected/f683bb9a-6a58-4af1-840c-844530b3a067-kube-api-access-n8fjc\") pod \"watcher-kuttl-api-0\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:48 crc kubenswrapper[5023]: I0219 08:21:48.951343 5023 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-r6fv7\" (UniqueName: \"kubernetes.io/projected/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-kube-api-access-r6fv7\") pod \"watcher-kuttl-applier-0\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.008449 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx5d7\" (UniqueName: \"kubernetes.io/projected/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-kube-api-access-vx5d7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.008571 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.008600 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.008671 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc 
kubenswrapper[5023]: I0219 08:21:49.008699 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.021292 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.038544 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.111182 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.111466 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.111496 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx5d7\" (UniqueName: \"kubernetes.io/projected/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-kube-api-access-vx5d7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.111562 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.111590 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.119690 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.119956 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.133539 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.149176 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx5d7\" (UniqueName: \"kubernetes.io/projected/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-kube-api-access-vx5d7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.152260 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.207228 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.524656 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.536989 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:21:49 crc kubenswrapper[5023]: I0219 08:21:49.660589 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:21:49 crc kubenswrapper[5023]: W0219 08:21:49.673081 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4cb1b4a_6289_4eef_8263_b9c37e537d6b.slice/crio-1444381f3ba2ce83a377f0fa2efc762691b7fbb813df82c7f70b3070d9e91427 WatchSource:0}: Error finding container 
1444381f3ba2ce83a377f0fa2efc762691b7fbb813df82c7f70b3070d9e91427: Status 404 returned error can't find the container with id 1444381f3ba2ce83a377f0fa2efc762691b7fbb813df82c7f70b3070d9e91427 Feb 19 08:21:50 crc kubenswrapper[5023]: I0219 08:21:50.309221 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"60feadb2-9033-4f1b-9f6d-99c5ddd03d25","Type":"ContainerStarted","Data":"92126945edc3708d132331bcff218cde822d6f42d6051b247e4a0c75ca6010d8"} Feb 19 08:21:50 crc kubenswrapper[5023]: I0219 08:21:50.312152 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4cb1b4a-6289-4eef-8263-b9c37e537d6b","Type":"ContainerStarted","Data":"1444381f3ba2ce83a377f0fa2efc762691b7fbb813df82c7f70b3070d9e91427"} Feb 19 08:21:50 crc kubenswrapper[5023]: I0219 08:21:50.314185 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f683bb9a-6a58-4af1-840c-844530b3a067","Type":"ContainerStarted","Data":"c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81"} Feb 19 08:21:50 crc kubenswrapper[5023]: I0219 08:21:50.314218 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f683bb9a-6a58-4af1-840c-844530b3a067","Type":"ContainerStarted","Data":"ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab"} Feb 19 08:21:50 crc kubenswrapper[5023]: I0219 08:21:50.314233 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f683bb9a-6a58-4af1-840c-844530b3a067","Type":"ContainerStarted","Data":"c3351587631cc5a4d9651bb43c59e7c69c9d72c19cf8b387db9b406c5161f02e"} Feb 19 08:21:50 crc kubenswrapper[5023]: I0219 08:21:50.314993 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:50 crc 
kubenswrapper[5023]: I0219 08:21:50.343647 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.343629075 podStartE2EDuration="2.343629075s" podCreationTimestamp="2026-02-19 08:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:21:50.336172067 +0000 UTC m=+1267.993291015" watchObservedRunningTime="2026-02-19 08:21:50.343629075 +0000 UTC m=+1268.000748023" Feb 19 08:21:51 crc kubenswrapper[5023]: I0219 08:21:51.322091 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"60feadb2-9033-4f1b-9f6d-99c5ddd03d25","Type":"ContainerStarted","Data":"e51eeb61cc6cfb7d03b2fd3a934ce529c7ffc1bea60c83012d73fa552f4922e1"} Feb 19 08:21:51 crc kubenswrapper[5023]: I0219 08:21:51.323779 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4cb1b4a-6289-4eef-8263-b9c37e537d6b","Type":"ContainerStarted","Data":"29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd"} Feb 19 08:21:51 crc kubenswrapper[5023]: I0219 08:21:51.344898 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.210229342 podStartE2EDuration="3.34488153s" podCreationTimestamp="2026-02-19 08:21:48 +0000 UTC" firstStartedPulling="2026-02-19 08:21:49.547091514 +0000 UTC m=+1267.204210452" lastFinishedPulling="2026-02-19 08:21:50.681743692 +0000 UTC m=+1268.338862640" observedRunningTime="2026-02-19 08:21:51.338279066 +0000 UTC m=+1268.995398014" watchObservedRunningTime="2026-02-19 08:21:51.34488153 +0000 UTC m=+1269.002000478" Feb 19 08:21:51 crc kubenswrapper[5023]: I0219 08:21:51.361930 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.352888602 podStartE2EDuration="3.361911102s" podCreationTimestamp="2026-02-19 08:21:48 +0000 UTC" firstStartedPulling="2026-02-19 08:21:49.6756589 +0000 UTC m=+1267.332777848" lastFinishedPulling="2026-02-19 08:21:50.6846814 +0000 UTC m=+1268.341800348" observedRunningTime="2026-02-19 08:21:51.358097321 +0000 UTC m=+1269.015216269" watchObservedRunningTime="2026-02-19 08:21:51.361911102 +0000 UTC m=+1269.019030050" Feb 19 08:21:52 crc kubenswrapper[5023]: I0219 08:21:52.595973 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:54 crc kubenswrapper[5023]: I0219 08:21:54.022254 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:54 crc kubenswrapper[5023]: I0219 08:21:54.039793 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.021532 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.025676 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.039075 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.066158 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.208746 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 
08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.238540 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.381763 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.387292 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.405231 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:21:59 crc kubenswrapper[5023]: I0219 08:21:59.414887 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:01 crc kubenswrapper[5023]: I0219 08:22:01.638933 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:01 crc kubenswrapper[5023]: I0219 08:22:01.639409 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-central-agent" containerID="cri-o://3332fd7f128cf4768a543f1c5d73c7a211870a81f0dfb1c704d165e8213cf8cf" gracePeriod=30 Feb 19 08:22:01 crc kubenswrapper[5023]: I0219 08:22:01.639480 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="sg-core" containerID="cri-o://f826272119e8a9d4917a4598e084b2ef27eb00bb37ffa5bacdbdfbb4582da965" gracePeriod=30 Feb 19 08:22:01 crc kubenswrapper[5023]: I0219 08:22:01.639492 5023 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-notification-agent" containerID="cri-o://62f41e4619c094020c3a347225080de75b48da9d56c9f2609ded151e1651460c" gracePeriod=30 Feb 19 08:22:01 crc kubenswrapper[5023]: I0219 08:22:01.639478 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="proxy-httpd" containerID="cri-o://6eddad2a1559334be4624a5720ae12b9a7bb2a68d77e0b3fac7959431f5dcf9c" gracePeriod=30 Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.402923 5023 generic.go:334] "Generic (PLEG): container finished" podID="023485c0-a529-4921-a1ee-69ed5651880f" containerID="6eddad2a1559334be4624a5720ae12b9a7bb2a68d77e0b3fac7959431f5dcf9c" exitCode=0 Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.403220 5023 generic.go:334] "Generic (PLEG): container finished" podID="023485c0-a529-4921-a1ee-69ed5651880f" containerID="f826272119e8a9d4917a4598e084b2ef27eb00bb37ffa5bacdbdfbb4582da965" exitCode=2 Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.403230 5023 generic.go:334] "Generic (PLEG): container finished" podID="023485c0-a529-4921-a1ee-69ed5651880f" containerID="3332fd7f128cf4768a543f1c5d73c7a211870a81f0dfb1c704d165e8213cf8cf" exitCode=0 Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.403101 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerDied","Data":"6eddad2a1559334be4624a5720ae12b9a7bb2a68d77e0b3fac7959431f5dcf9c"} Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.403276 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerDied","Data":"f826272119e8a9d4917a4598e084b2ef27eb00bb37ffa5bacdbdfbb4582da965"} Feb 19 08:22:02 crc 
kubenswrapper[5023]: I0219 08:22:02.403291 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerDied","Data":"3332fd7f128cf4768a543f1c5d73c7a211870a81f0dfb1c704d165e8213cf8cf"} Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.850153 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.850357 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="60feadb2-9033-4f1b-9f6d-99c5ddd03d25" containerName="watcher-decision-engine" containerID="cri-o://e51eeb61cc6cfb7d03b2fd3a934ce529c7ffc1bea60c83012d73fa552f4922e1" gracePeriod=30 Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.868168 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.868411 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-kuttl-api-log" containerID="cri-o://ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab" gracePeriod=30 Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.868506 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-api" containerID="cri-o://c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81" gracePeriod=30 Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.885337 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:02 crc kubenswrapper[5023]: I0219 08:22:02.885666 5023 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d4cb1b4a-6289-4eef-8263-b9c37e537d6b" containerName="watcher-applier" containerID="cri-o://29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd" gracePeriod=30 Feb 19 08:22:03 crc kubenswrapper[5023]: I0219 08:22:03.412236 5023 generic.go:334] "Generic (PLEG): container finished" podID="f683bb9a-6a58-4af1-840c-844530b3a067" containerID="ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab" exitCode=143 Feb 19 08:22:03 crc kubenswrapper[5023]: I0219 08:22:03.412523 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f683bb9a-6a58-4af1-840c-844530b3a067","Type":"ContainerDied","Data":"ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab"} Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.052315 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.053491 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.054754 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.054797 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d4cb1b4a-6289-4eef-8263-b9c37e537d6b" containerName="watcher-applier" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.209044 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.262081 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f683bb9a-6a58-4af1-840c-844530b3a067-logs\") pod \"f683bb9a-6a58-4af1-840c-844530b3a067\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.262160 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-custom-prometheus-ca\") pod \"f683bb9a-6a58-4af1-840c-844530b3a067\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.262198 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-combined-ca-bundle\") pod \"f683bb9a-6a58-4af1-840c-844530b3a067\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.262443 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8fjc\" (UniqueName: 
\"kubernetes.io/projected/f683bb9a-6a58-4af1-840c-844530b3a067-kube-api-access-n8fjc\") pod \"f683bb9a-6a58-4af1-840c-844530b3a067\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.262566 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-config-data\") pod \"f683bb9a-6a58-4af1-840c-844530b3a067\" (UID: \"f683bb9a-6a58-4af1-840c-844530b3a067\") " Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.262701 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f683bb9a-6a58-4af1-840c-844530b3a067-logs" (OuterVolumeSpecName: "logs") pod "f683bb9a-6a58-4af1-840c-844530b3a067" (UID: "f683bb9a-6a58-4af1-840c-844530b3a067"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.263185 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f683bb9a-6a58-4af1-840c-844530b3a067-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.288076 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f683bb9a-6a58-4af1-840c-844530b3a067-kube-api-access-n8fjc" (OuterVolumeSpecName: "kube-api-access-n8fjc") pod "f683bb9a-6a58-4af1-840c-844530b3a067" (UID: "f683bb9a-6a58-4af1-840c-844530b3a067"). InnerVolumeSpecName "kube-api-access-n8fjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.294760 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f683bb9a-6a58-4af1-840c-844530b3a067" (UID: "f683bb9a-6a58-4af1-840c-844530b3a067"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.325004 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f683bb9a-6a58-4af1-840c-844530b3a067" (UID: "f683bb9a-6a58-4af1-840c-844530b3a067"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.345745 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-config-data" (OuterVolumeSpecName: "config-data") pod "f683bb9a-6a58-4af1-840c-844530b3a067" (UID: "f683bb9a-6a58-4af1-840c-844530b3a067"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.367486 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.367527 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.367538 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f683bb9a-6a58-4af1-840c-844530b3a067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.367549 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8fjc\" (UniqueName: \"kubernetes.io/projected/f683bb9a-6a58-4af1-840c-844530b3a067-kube-api-access-n8fjc\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.422822 5023 generic.go:334] "Generic (PLEG): container finished" podID="f683bb9a-6a58-4af1-840c-844530b3a067" containerID="c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81" exitCode=0 Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.422874 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f683bb9a-6a58-4af1-840c-844530b3a067","Type":"ContainerDied","Data":"c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81"} Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.422884 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.422908 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f683bb9a-6a58-4af1-840c-844530b3a067","Type":"ContainerDied","Data":"c3351587631cc5a4d9651bb43c59e7c69c9d72c19cf8b387db9b406c5161f02e"} Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.422929 5023 scope.go:117] "RemoveContainer" containerID="c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.456416 5023 scope.go:117] "RemoveContainer" containerID="ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.469190 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.480020 5023 scope.go:117] "RemoveContainer" containerID="c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81" Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.480919 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81\": container with ID starting with c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81 not found: ID does not exist" containerID="c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.480982 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81"} err="failed to get container status \"c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81\": rpc error: code = NotFound desc = could not find container 
\"c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81\": container with ID starting with c12fb6bb439e5488e0aab185d86bb4242997d91897df1d6127cc2f2a13ca9b81 not found: ID does not exist" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.481016 5023 scope.go:117] "RemoveContainer" containerID="ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab" Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.481350 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab\": container with ID starting with ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab not found: ID does not exist" containerID="ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.481371 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab"} err="failed to get container status \"ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab\": rpc error: code = NotFound desc = could not find container \"ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab\": container with ID starting with ebe0048e35b4a5349047ea2ed4f9d6d2b547cf785712cf51ea79c8073c2ec1ab not found: ID does not exist" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.484330 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.504661 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.505046 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-api" Feb 19 08:22:04 crc 
kubenswrapper[5023]: I0219 08:22:04.505063 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-api" Feb 19 08:22:04 crc kubenswrapper[5023]: E0219 08:22:04.505083 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-kuttl-api-log" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.505090 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-kuttl-api-log" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.505247 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-api" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.505267 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-kuttl-api-log" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.506137 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.508476 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.517077 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.571965 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxjr\" (UniqueName: \"kubernetes.io/projected/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-kube-api-access-qcxjr\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.572102 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.572186 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.572587 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.572685 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.674649 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.674711 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.674739 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.674779 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxjr\" (UniqueName: \"kubernetes.io/projected/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-kube-api-access-qcxjr\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc 
kubenswrapper[5023]: I0219 08:22:04.674851 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.675210 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.678986 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.679182 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.679717 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.698986 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qcxjr\" (UniqueName: \"kubernetes.io/projected/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-kube-api-access-qcxjr\") pod \"watcher-kuttl-api-0\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:04 crc kubenswrapper[5023]: I0219 08:22:04.821327 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:05 crc kubenswrapper[5023]: I0219 08:22:05.290186 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:05 crc kubenswrapper[5023]: I0219 08:22:05.445577 5023 generic.go:334] "Generic (PLEG): container finished" podID="023485c0-a529-4921-a1ee-69ed5651880f" containerID="62f41e4619c094020c3a347225080de75b48da9d56c9f2609ded151e1651460c" exitCode=0 Feb 19 08:22:05 crc kubenswrapper[5023]: I0219 08:22:05.445666 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerDied","Data":"62f41e4619c094020c3a347225080de75b48da9d56c9f2609ded151e1651460c"} Feb 19 08:22:05 crc kubenswrapper[5023]: I0219 08:22:05.447182 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6","Type":"ContainerStarted","Data":"3d3ace36bffaa3d57acaac706f9eabd687cb66c91b527ff02daf81b2f95c612d"} Feb 19 08:22:05 crc kubenswrapper[5023]: I0219 08:22:05.487179 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" path="/var/lib/kubelet/pods/f683bb9a-6a58-4af1-840c-844530b3a067/volumes" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.224801 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.301891 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-ceilometer-tls-certs\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.301944 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-scripts\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.302005 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-sg-core-conf-yaml\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.302023 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh5bg\" (UniqueName: \"kubernetes.io/projected/023485c0-a529-4921-a1ee-69ed5651880f-kube-api-access-xh5bg\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.302068 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-config-data\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.302096 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-log-httpd\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.302189 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-run-httpd\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.302213 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-combined-ca-bundle\") pod \"023485c0-a529-4921-a1ee-69ed5651880f\" (UID: \"023485c0-a529-4921-a1ee-69ed5651880f\") " Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.307022 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023485c0-a529-4921-a1ee-69ed5651880f-kube-api-access-xh5bg" (OuterVolumeSpecName: "kube-api-access-xh5bg") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "kube-api-access-xh5bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.308083 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.309706 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.323794 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-scripts" (OuterVolumeSpecName: "scripts") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.352692 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.390863 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.397366 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.405721 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.405750 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.405761 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.405772 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.405781 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.405794 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh5bg\" (UniqueName: 
\"kubernetes.io/projected/023485c0-a529-4921-a1ee-69ed5651880f-kube-api-access-xh5bg\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.405802 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/023485c0-a529-4921-a1ee-69ed5651880f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.418374 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-config-data" (OuterVolumeSpecName: "config-data") pod "023485c0-a529-4921-a1ee-69ed5651880f" (UID: "023485c0-a529-4921-a1ee-69ed5651880f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.472654 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"023485c0-a529-4921-a1ee-69ed5651880f","Type":"ContainerDied","Data":"6593a5f0446626d83d525cbf6250d2aaa77c8bb10b49e5e2c493bf246003e0c8"} Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.472715 5023 scope.go:117] "RemoveContainer" containerID="6eddad2a1559334be4624a5720ae12b9a7bb2a68d77e0b3fac7959431f5dcf9c" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.472845 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.485640 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6","Type":"ContainerStarted","Data":"e0b69cc32bed5c5efc3ae9e7d09fa9a33e3e72bf12c5234672652b1d9b4a3444"} Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.485687 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6","Type":"ContainerStarted","Data":"5b9d63921e1fef29d6a528b1a7c13b0935bd47a4f627320143bad8275bfd7e3e"} Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.486945 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.507306 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023485c0-a529-4921-a1ee-69ed5651880f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.511222 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.511199026 podStartE2EDuration="2.511199026s" podCreationTimestamp="2026-02-19 08:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:06.509013568 +0000 UTC m=+1284.166132526" watchObservedRunningTime="2026-02-19 08:22:06.511199026 +0000 UTC m=+1284.168317974" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.515730 5023 scope.go:117] "RemoveContainer" containerID="f826272119e8a9d4917a4598e084b2ef27eb00bb37ffa5bacdbdfbb4582da965" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.536408 5023 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.539103 5023 scope.go:117] "RemoveContainer" containerID="62f41e4619c094020c3a347225080de75b48da9d56c9f2609ded151e1651460c" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.550974 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.568595 5023 scope.go:117] "RemoveContainer" containerID="3332fd7f128cf4768a543f1c5d73c7a211870a81f0dfb1c704d165e8213cf8cf" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.572823 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:06 crc kubenswrapper[5023]: E0219 08:22:06.573157 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="proxy-httpd" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.573175 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="proxy-httpd" Feb 19 08:22:06 crc kubenswrapper[5023]: E0219 08:22:06.573185 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-central-agent" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.573192 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-central-agent" Feb 19 08:22:06 crc kubenswrapper[5023]: E0219 08:22:06.573199 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="sg-core" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.573208 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="sg-core" Feb 19 08:22:06 crc kubenswrapper[5023]: E0219 08:22:06.573232 5023 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-notification-agent" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.573237 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-notification-agent" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.573854 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="proxy-httpd" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.573963 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-notification-agent" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.574040 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="sg-core" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.574132 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="023485c0-a529-4921-a1ee-69ed5651880f" containerName="ceilometer-central-agent" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.593237 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.593347 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.598194 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.598404 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.598534 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.709793 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.709851 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-config-data\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.709885 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.710019 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-scripts\") 
pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.710081 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.710214 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.710303 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.710351 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwcrw\" (UniqueName: \"kubernetes.io/projected/f5e63977-90bc-4e8d-8597-8e87ad5966c4-kube-api-access-pwcrw\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.812502 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-scripts\") pod \"ceilometer-0\" (UID: 
\"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.812569 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.812590 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.812610 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.812853 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwcrw\" (UniqueName: \"kubernetes.io/projected/f5e63977-90bc-4e8d-8597-8e87ad5966c4-kube-api-access-pwcrw\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.812893 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 
08:22:06.812930 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-config-data\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.813344 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.813359 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.813659 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.817478 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.817825 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-scripts\") pod \"ceilometer-0\" (UID: 
\"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.817973 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-config-data\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.818410 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.818526 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.831885 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwcrw\" (UniqueName: \"kubernetes.io/projected/f5e63977-90bc-4e8d-8597-8e87ad5966c4-kube-api-access-pwcrw\") pod \"ceilometer-0\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:06 crc kubenswrapper[5023]: I0219 08:22:06.916871 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.388528 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.529941 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023485c0-a529-4921-a1ee-69ed5651880f" path="/var/lib/kubelet/pods/023485c0-a529-4921-a1ee-69ed5651880f/volumes" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.531290 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerStarted","Data":"e5f339c74bfb8fd88357350a240d4607f3e7ab5bf09a44c0224d730910644119"} Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.538094 5023 generic.go:334] "Generic (PLEG): container finished" podID="d4cb1b4a-6289-4eef-8263-b9c37e537d6b" containerID="29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd" exitCode=0 Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.538237 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4cb1b4a-6289-4eef-8263-b9c37e537d6b","Type":"ContainerDied","Data":"29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd"} Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.675727 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.744120 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6fv7\" (UniqueName: \"kubernetes.io/projected/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-kube-api-access-r6fv7\") pod \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.744287 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-combined-ca-bundle\") pod \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.744349 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-logs\") pod \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.744397 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-config-data\") pod \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\" (UID: \"d4cb1b4a-6289-4eef-8263-b9c37e537d6b\") " Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.755025 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-logs" (OuterVolumeSpecName: "logs") pod "d4cb1b4a-6289-4eef-8263-b9c37e537d6b" (UID: "d4cb1b4a-6289-4eef-8263-b9c37e537d6b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.756363 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-kube-api-access-r6fv7" (OuterVolumeSpecName: "kube-api-access-r6fv7") pod "d4cb1b4a-6289-4eef-8263-b9c37e537d6b" (UID: "d4cb1b4a-6289-4eef-8263-b9c37e537d6b"). InnerVolumeSpecName "kube-api-access-r6fv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.785027 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4cb1b4a-6289-4eef-8263-b9c37e537d6b" (UID: "d4cb1b4a-6289-4eef-8263-b9c37e537d6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.797336 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-config-data" (OuterVolumeSpecName: "config-data") pod "d4cb1b4a-6289-4eef-8263-b9c37e537d6b" (UID: "d4cb1b4a-6289-4eef-8263-b9c37e537d6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.845929 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.845971 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6fv7\" (UniqueName: \"kubernetes.io/projected/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-kube-api-access-r6fv7\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.845981 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:07 crc kubenswrapper[5023]: I0219 08:22:07.845991 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4cb1b4a-6289-4eef-8263-b9c37e537d6b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.555960 5023 generic.go:334] "Generic (PLEG): container finished" podID="60feadb2-9033-4f1b-9f6d-99c5ddd03d25" containerID="e51eeb61cc6cfb7d03b2fd3a934ce529c7ffc1bea60c83012d73fa552f4922e1" exitCode=0 Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.556023 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"60feadb2-9033-4f1b-9f6d-99c5ddd03d25","Type":"ContainerDied","Data":"e51eeb61cc6cfb7d03b2fd3a934ce529c7ffc1bea60c83012d73fa552f4922e1"} Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.559163 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.559690 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.559721 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4cb1b4a-6289-4eef-8263-b9c37e537d6b","Type":"ContainerDied","Data":"1444381f3ba2ce83a377f0fa2efc762691b7fbb813df82c7f70b3070d9e91427"} Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.559810 5023 scope.go:117] "RemoveContainer" containerID="29dfacd443b0f9d99c82cdef64c6dff421b5e43e786a56e7eb4bda60556139dd" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.590272 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.600049 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.607512 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.645762 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:08 crc kubenswrapper[5023]: E0219 08:22:08.646227 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb1b4a-6289-4eef-8263-b9c37e537d6b" containerName="watcher-applier" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.646321 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb1b4a-6289-4eef-8263-b9c37e537d6b" containerName="watcher-applier" Feb 19 08:22:08 crc kubenswrapper[5023]: E0219 08:22:08.646471 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60feadb2-9033-4f1b-9f6d-99c5ddd03d25" containerName="watcher-decision-engine" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.646554 5023 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="60feadb2-9033-4f1b-9f6d-99c5ddd03d25" containerName="watcher-decision-engine" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.646896 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="60feadb2-9033-4f1b-9f6d-99c5ddd03d25" containerName="watcher-decision-engine" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.646972 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cb1b4a-6289-4eef-8263-b9c37e537d6b" containerName="watcher-applier" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.649160 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.653714 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.659096 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-config-data\") pod \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.659196 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-custom-prometheus-ca\") pod \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.659237 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-logs\") pod \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 
08:22:08.659270 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-combined-ca-bundle\") pod \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.659390 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx5d7\" (UniqueName: \"kubernetes.io/projected/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-kube-api-access-vx5d7\") pod \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\" (UID: \"60feadb2-9033-4f1b-9f6d-99c5ddd03d25\") " Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.666278 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-logs" (OuterVolumeSpecName: "logs") pod "60feadb2-9033-4f1b-9f6d-99c5ddd03d25" (UID: "60feadb2-9033-4f1b-9f6d-99c5ddd03d25"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.666883 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-kube-api-access-vx5d7" (OuterVolumeSpecName: "kube-api-access-vx5d7") pod "60feadb2-9033-4f1b-9f6d-99c5ddd03d25" (UID: "60feadb2-9033-4f1b-9f6d-99c5ddd03d25"). InnerVolumeSpecName "kube-api-access-vx5d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.671721 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.710759 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60feadb2-9033-4f1b-9f6d-99c5ddd03d25" (UID: "60feadb2-9033-4f1b-9f6d-99c5ddd03d25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.725068 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "60feadb2-9033-4f1b-9f6d-99c5ddd03d25" (UID: "60feadb2-9033-4f1b-9f6d-99c5ddd03d25"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.763276 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-config-data" (OuterVolumeSpecName: "config-data") pod "60feadb2-9033-4f1b-9f6d-99c5ddd03d25" (UID: "60feadb2-9033-4f1b-9f6d-99c5ddd03d25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.768109 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.768308 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7646-81da-4cb4-86e1-3b244b7301bc-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.768489 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.768517 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpmz\" (UniqueName: \"kubernetes.io/projected/373e7646-81da-4cb4-86e1-3b244b7301bc-kube-api-access-8gpmz\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.769049 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx5d7\" (UniqueName: \"kubernetes.io/projected/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-kube-api-access-vx5d7\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:08 crc 
kubenswrapper[5023]: I0219 08:22:08.769068 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.769078 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.769087 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.769097 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60feadb2-9033-4f1b-9f6d-99c5ddd03d25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.870207 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7646-81da-4cb4-86e1-3b244b7301bc-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.870393 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.870470 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gpmz\" (UniqueName: 
\"kubernetes.io/projected/373e7646-81da-4cb4-86e1-3b244b7301bc-kube-api-access-8gpmz\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.870572 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.873020 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7646-81da-4cb4-86e1-3b244b7301bc-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.874239 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.883314 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.890092 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gpmz\" (UniqueName: \"kubernetes.io/projected/373e7646-81da-4cb4-86e1-3b244b7301bc-kube-api-access-8gpmz\") pod 
\"watcher-kuttl-applier-0\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:08 crc kubenswrapper[5023]: I0219 08:22:08.982908 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.023478 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.023811 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f683bb9a-6a58-4af1-840c-844530b3a067" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.211232 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.439324 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.493221 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cb1b4a-6289-4eef-8263-b9c37e537d6b" path="/var/lib/kubelet/pods/d4cb1b4a-6289-4eef-8263-b9c37e537d6b/volumes" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.567000 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"373e7646-81da-4cb4-86e1-3b244b7301bc","Type":"ContainerStarted","Data":"7ee3a535c38c194888cc02efa29b195737817a5443a7627de97d959848f3052d"} Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.568603 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"60feadb2-9033-4f1b-9f6d-99c5ddd03d25","Type":"ContainerDied","Data":"92126945edc3708d132331bcff218cde822d6f42d6051b247e4a0c75ca6010d8"} Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.568946 5023 scope.go:117] "RemoveContainer" containerID="e51eeb61cc6cfb7d03b2fd3a934ce529c7ffc1bea60c83012d73fa552f4922e1" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.568775 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.576380 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerStarted","Data":"8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b"} Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.641676 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.654328 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.678753 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.696143 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.705448 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.721045 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.822245 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.827397 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwmm\" (UniqueName: \"kubernetes.io/projected/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-kube-api-access-mpwmm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.827452 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.827477 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.827514 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.827582 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.928681 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.928788 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwmm\" (UniqueName: \"kubernetes.io/projected/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-kube-api-access-mpwmm\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.928834 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.928874 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:09 crc kubenswrapper[5023]: I0219 08:22:09.928926 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.045707 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.057018 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.059706 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwmm\" (UniqueName: \"kubernetes.io/projected/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-kube-api-access-mpwmm\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.059918 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.060342 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.257766 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.615652 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"373e7646-81da-4cb4-86e1-3b244b7301bc","Type":"ContainerStarted","Data":"e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784"} Feb 19 08:22:10 crc kubenswrapper[5023]: I0219 08:22:10.640360 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.640338565 podStartE2EDuration="2.640338565s" podCreationTimestamp="2026-02-19 08:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:10.63298062 +0000 UTC m=+1288.290099568" watchObservedRunningTime="2026-02-19 08:22:10.640338565 +0000 UTC m=+1288.297457513" Feb 19 08:22:10 crc 
kubenswrapper[5023]: I0219 08:22:10.816290 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:10 crc kubenswrapper[5023]: W0219 08:22:10.829993 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c9cc5c6_0f2e_47f4_8ce9_92456e859e49.slice/crio-7413b3512fa18ed360cd6868801650dde1885563f0c7d525e188fb9952f5b17a WatchSource:0}: Error finding container 7413b3512fa18ed360cd6868801650dde1885563f0c7d525e188fb9952f5b17a: Status 404 returned error can't find the container with id 7413b3512fa18ed360cd6868801650dde1885563f0c7d525e188fb9952f5b17a Feb 19 08:22:11 crc kubenswrapper[5023]: I0219 08:22:11.485858 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60feadb2-9033-4f1b-9f6d-99c5ddd03d25" path="/var/lib/kubelet/pods/60feadb2-9033-4f1b-9f6d-99c5ddd03d25/volumes" Feb 19 08:22:11 crc kubenswrapper[5023]: I0219 08:22:11.629102 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49","Type":"ContainerStarted","Data":"b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e"} Feb 19 08:22:11 crc kubenswrapper[5023]: I0219 08:22:11.629151 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49","Type":"ContainerStarted","Data":"7413b3512fa18ed360cd6868801650dde1885563f0c7d525e188fb9952f5b17a"} Feb 19 08:22:11 crc kubenswrapper[5023]: I0219 08:22:11.631394 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerStarted","Data":"92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14"} Feb 19 08:22:11 crc kubenswrapper[5023]: I0219 08:22:11.631431 5023 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerStarted","Data":"8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd"} Feb 19 08:22:11 crc kubenswrapper[5023]: I0219 08:22:11.652827 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.652808896 podStartE2EDuration="2.652808896s" podCreationTimestamp="2026-02-19 08:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:11.646777816 +0000 UTC m=+1289.303896764" watchObservedRunningTime="2026-02-19 08:22:11.652808896 +0000 UTC m=+1289.309927834" Feb 19 08:22:13 crc kubenswrapper[5023]: I0219 08:22:13.656436 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerStarted","Data":"5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d"} Feb 19 08:22:13 crc kubenswrapper[5023]: I0219 08:22:13.657342 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:13 crc kubenswrapper[5023]: I0219 08:22:13.983478 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:14 crc kubenswrapper[5023]: I0219 08:22:14.821721 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:14 crc kubenswrapper[5023]: I0219 08:22:14.836704 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:14 crc kubenswrapper[5023]: I0219 08:22:14.872029 5023 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=3.316850113 podStartE2EDuration="8.872008649s" podCreationTimestamp="2026-02-19 08:22:06 +0000 UTC" firstStartedPulling="2026-02-19 08:22:07.405417696 +0000 UTC m=+1285.062536644" lastFinishedPulling="2026-02-19 08:22:12.960576192 +0000 UTC m=+1290.617695180" observedRunningTime="2026-02-19 08:22:13.693062387 +0000 UTC m=+1291.350181335" watchObservedRunningTime="2026-02-19 08:22:14.872008649 +0000 UTC m=+1292.529127617" Feb 19 08:22:15 crc kubenswrapper[5023]: I0219 08:22:15.678005 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:18 crc kubenswrapper[5023]: I0219 08:22:18.983832 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:19 crc kubenswrapper[5023]: I0219 08:22:19.008140 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:19 crc kubenswrapper[5023]: I0219 08:22:19.738229 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:20 crc kubenswrapper[5023]: I0219 08:22:20.258675 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:20 crc kubenswrapper[5023]: I0219 08:22:20.285186 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:20 crc kubenswrapper[5023]: I0219 08:22:20.723609 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:20 crc kubenswrapper[5023]: I0219 08:22:20.748356 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:26 crc kubenswrapper[5023]: I0219 08:22:26.902891 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:26 crc kubenswrapper[5023]: I0219 08:22:26.903566 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" containerName="watcher-decision-engine" containerID="cri-o://b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e" gracePeriod=30 Feb 19 08:22:26 crc kubenswrapper[5023]: I0219 08:22:26.913775 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:26 crc kubenswrapper[5023]: I0219 08:22:26.914195 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="373e7646-81da-4cb4-86e1-3b244b7301bc" containerName="watcher-applier" containerID="cri-o://e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784" gracePeriod=30 Feb 19 08:22:26 crc kubenswrapper[5023]: I0219 08:22:26.926093 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:26 crc kubenswrapper[5023]: I0219 08:22:26.926579 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-kuttl-api-log" containerID="cri-o://5b9d63921e1fef29d6a528b1a7c13b0935bd47a4f627320143bad8275bfd7e3e" gracePeriod=30 Feb 19 08:22:26 crc kubenswrapper[5023]: I0219 08:22:26.926707 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-api" 
containerID="cri-o://e0b69cc32bed5c5efc3ae9e7d09fa9a33e3e72bf12c5234672652b1d9b4a3444" gracePeriod=30 Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.780093 5023 generic.go:334] "Generic (PLEG): container finished" podID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerID="e0b69cc32bed5c5efc3ae9e7d09fa9a33e3e72bf12c5234672652b1d9b4a3444" exitCode=0 Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.780768 5023 generic.go:334] "Generic (PLEG): container finished" podID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerID="5b9d63921e1fef29d6a528b1a7c13b0935bd47a4f627320143bad8275bfd7e3e" exitCode=143 Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.780326 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6","Type":"ContainerDied","Data":"e0b69cc32bed5c5efc3ae9e7d09fa9a33e3e72bf12c5234672652b1d9b4a3444"} Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.780970 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6","Type":"ContainerDied","Data":"5b9d63921e1fef29d6a528b1a7c13b0935bd47a4f627320143bad8275bfd7e3e"} Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.781070 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6","Type":"ContainerDied","Data":"3d3ace36bffaa3d57acaac706f9eabd687cb66c91b527ff02daf81b2f95c612d"} Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.781154 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d3ace36bffaa3d57acaac706f9eabd687cb66c91b527ff02daf81b2f95c612d" Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.818137 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.978075 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-combined-ca-bundle\") pod \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.978267 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-custom-prometheus-ca\") pod \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.978337 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-config-data\") pod \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.978366 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-logs\") pod \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.978408 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcxjr\" (UniqueName: \"kubernetes.io/projected/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-kube-api-access-qcxjr\") pod \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\" (UID: \"2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6\") " Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.979737 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-logs" (OuterVolumeSpecName: "logs") pod "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" (UID: "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:27 crc kubenswrapper[5023]: I0219 08:22:27.987780 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-kube-api-access-qcxjr" (OuterVolumeSpecName: "kube-api-access-qcxjr") pod "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" (UID: "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6"). InnerVolumeSpecName "kube-api-access-qcxjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.013024 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" (UID: "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.015502 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" (UID: "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.043185 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-config-data" (OuterVolumeSpecName: "config-data") pod "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" (UID: "2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.080308 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.080342 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.080354 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcxjr\" (UniqueName: \"kubernetes.io/projected/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-kube-api-access-qcxjr\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.080365 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.080375 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.787014 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.816719 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.824048 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.846949 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:28 crc kubenswrapper[5023]: E0219 08:22:28.847567 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-kuttl-api-log" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.847592 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-kuttl-api-log" Feb 19 08:22:28 crc kubenswrapper[5023]: E0219 08:22:28.847607 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-api" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.847615 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-api" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.847833 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-kuttl-api-log" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.847870 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" containerName="watcher-api" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.848901 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.853086 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.869291 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.894710 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.894766 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmts7\" (UniqueName: \"kubernetes.io/projected/4b28d888-a104-4e22-ba05-989521181dcb-kube-api-access-lmts7\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.894825 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b28d888-a104-4e22-ba05-989521181dcb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.894845 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.895065 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: E0219 08:22:28.985076 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:28 crc kubenswrapper[5023]: E0219 08:22:28.989599 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:28 crc kubenswrapper[5023]: E0219 08:22:28.991097 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:28 crc kubenswrapper[5023]: E0219 08:22:28.991174 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="373e7646-81da-4cb4-86e1-3b244b7301bc" 
containerName="watcher-applier" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.999488 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b28d888-a104-4e22-ba05-989521181dcb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.999553 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.999603 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.999718 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:28 crc kubenswrapper[5023]: I0219 08:22:28.999745 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmts7\" (UniqueName: \"kubernetes.io/projected/4b28d888-a104-4e22-ba05-989521181dcb-kube-api-access-lmts7\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 
08:22:29.000005 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b28d888-a104-4e22-ba05-989521181dcb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.003683 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.004474 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.004493 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.019204 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmts7\" (UniqueName: \"kubernetes.io/projected/4b28d888-a104-4e22-ba05-989521181dcb-kube-api-access-lmts7\") pod \"watcher-kuttl-api-0\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.189646 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.490053 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6" path="/var/lib/kubelet/pods/2f8d9b7a-8f88-40f4-a1b4-e69d3e9c47c6/volumes" Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.681269 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:29 crc kubenswrapper[5023]: W0219 08:22:29.691192 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b28d888_a104_4e22_ba05_989521181dcb.slice/crio-c46b1e7421c03f7965fc25067cbd1b812acd775cbe7b634c0cad7009e982d416 WatchSource:0}: Error finding container c46b1e7421c03f7965fc25067cbd1b812acd775cbe7b634c0cad7009e982d416: Status 404 returned error can't find the container with id c46b1e7421c03f7965fc25067cbd1b812acd775cbe7b634c0cad7009e982d416 Feb 19 08:22:29 crc kubenswrapper[5023]: I0219 08:22:29.795055 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b28d888-a104-4e22-ba05-989521181dcb","Type":"ContainerStarted","Data":"c46b1e7421c03f7965fc25067cbd1b812acd775cbe7b634c0cad7009e982d416"} Feb 19 08:22:30 crc kubenswrapper[5023]: I0219 08:22:30.804604 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b28d888-a104-4e22-ba05-989521181dcb","Type":"ContainerStarted","Data":"48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390"} Feb 19 08:22:30 crc kubenswrapper[5023]: I0219 08:22:30.804977 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b28d888-a104-4e22-ba05-989521181dcb","Type":"ContainerStarted","Data":"e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131"} Feb 19 
08:22:30 crc kubenswrapper[5023]: I0219 08:22:30.804996 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:30 crc kubenswrapper[5023]: I0219 08:22:30.826711 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.826685 podStartE2EDuration="2.826685s" podCreationTimestamp="2026-02-19 08:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:30.825338254 +0000 UTC m=+1308.482457202" watchObservedRunningTime="2026-02-19 08:22:30.826685 +0000 UTC m=+1308.483803958" Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.826034 5023 generic.go:334] "Generic (PLEG): container finished" podID="373e7646-81da-4cb4-86e1-3b244b7301bc" containerID="e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784" exitCode=0 Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.827043 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"373e7646-81da-4cb4-86e1-3b244b7301bc","Type":"ContainerDied","Data":"e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784"} Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.914025 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.941132 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-combined-ca-bundle\") pod \"373e7646-81da-4cb4-86e1-3b244b7301bc\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.941254 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-config-data\") pod \"373e7646-81da-4cb4-86e1-3b244b7301bc\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.941285 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7646-81da-4cb4-86e1-3b244b7301bc-logs\") pod \"373e7646-81da-4cb4-86e1-3b244b7301bc\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.941308 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gpmz\" (UniqueName: \"kubernetes.io/projected/373e7646-81da-4cb4-86e1-3b244b7301bc-kube-api-access-8gpmz\") pod \"373e7646-81da-4cb4-86e1-3b244b7301bc\" (UID: \"373e7646-81da-4cb4-86e1-3b244b7301bc\") " Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.943018 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/373e7646-81da-4cb4-86e1-3b244b7301bc-logs" (OuterVolumeSpecName: "logs") pod "373e7646-81da-4cb4-86e1-3b244b7301bc" (UID: "373e7646-81da-4cb4-86e1-3b244b7301bc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.947666 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373e7646-81da-4cb4-86e1-3b244b7301bc-kube-api-access-8gpmz" (OuterVolumeSpecName: "kube-api-access-8gpmz") pod "373e7646-81da-4cb4-86e1-3b244b7301bc" (UID: "373e7646-81da-4cb4-86e1-3b244b7301bc"). InnerVolumeSpecName "kube-api-access-8gpmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.987978 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "373e7646-81da-4cb4-86e1-3b244b7301bc" (UID: "373e7646-81da-4cb4-86e1-3b244b7301bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:31 crc kubenswrapper[5023]: I0219 08:22:31.996481 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-config-data" (OuterVolumeSpecName: "config-data") pod "373e7646-81da-4cb4-86e1-3b244b7301bc" (UID: "373e7646-81da-4cb4-86e1-3b244b7301bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.043105 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.043147 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7646-81da-4cb4-86e1-3b244b7301bc-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.043163 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gpmz\" (UniqueName: \"kubernetes.io/projected/373e7646-81da-4cb4-86e1-3b244b7301bc-kube-api-access-8gpmz\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.043176 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7646-81da-4cb4-86e1-3b244b7301bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.852251 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"373e7646-81da-4cb4-86e1-3b244b7301bc","Type":"ContainerDied","Data":"7ee3a535c38c194888cc02efa29b195737817a5443a7627de97d959848f3052d"} Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.852336 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.852584 5023 scope.go:117] "RemoveContainer" containerID="e83b02928ec7dae2e40b99e212fcc6d964e6b353d9c23145ca070ec148f07784" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.900752 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.927943 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.937280 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:32 crc kubenswrapper[5023]: E0219 08:22:32.937784 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373e7646-81da-4cb4-86e1-3b244b7301bc" containerName="watcher-applier" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.937803 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="373e7646-81da-4cb4-86e1-3b244b7301bc" containerName="watcher-applier" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.938019 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="373e7646-81da-4cb4-86e1-3b244b7301bc" containerName="watcher-applier" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.938761 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.943649 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:22:32 crc kubenswrapper[5023]: I0219 08:22:32.947005 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.057565 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.057655 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9042ec14-7c5b-404c-8145-c7b4925ccfff-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.057791 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.057840 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xksc4\" (UniqueName: \"kubernetes.io/projected/9042ec14-7c5b-404c-8145-c7b4925ccfff-kube-api-access-xksc4\") pod \"watcher-kuttl-applier-0\" (UID: 
\"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.159193 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.159253 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9042ec14-7c5b-404c-8145-c7b4925ccfff-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.159309 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.159336 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xksc4\" (UniqueName: \"kubernetes.io/projected/9042ec14-7c5b-404c-8145-c7b4925ccfff-kube-api-access-xksc4\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.162453 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9042ec14-7c5b-404c-8145-c7b4925ccfff-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.167020 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.168027 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.186822 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xksc4\" (UniqueName: \"kubernetes.io/projected/9042ec14-7c5b-404c-8145-c7b4925ccfff-kube-api-access-xksc4\") pod \"watcher-kuttl-applier-0\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.201738 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.257719 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.310905 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.464055 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-custom-prometheus-ca\") pod \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.464134 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwmm\" (UniqueName: \"kubernetes.io/projected/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-kube-api-access-mpwmm\") pod \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.464198 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-logs\") pod \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.464256 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-config-data\") pod \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.464772 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-logs" (OuterVolumeSpecName: "logs") pod "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" (UID: "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.465161 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-combined-ca-bundle\") pod \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\" (UID: \"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49\") " Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.465806 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.467972 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-kube-api-access-mpwmm" (OuterVolumeSpecName: "kube-api-access-mpwmm") pod "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" (UID: "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49"). InnerVolumeSpecName "kube-api-access-mpwmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.490954 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" (UID: "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.497943 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373e7646-81da-4cb4-86e1-3b244b7301bc" path="/var/lib/kubelet/pods/373e7646-81da-4cb4-86e1-3b244b7301bc/volumes" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.504332 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" (UID: "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.512814 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-config-data" (OuterVolumeSpecName: "config-data") pod "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" (UID: "3c9cc5c6-0f2e-47f4-8ce9-92456e859e49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.567105 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.567149 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.567196 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.567209 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpwmm\" (UniqueName: \"kubernetes.io/projected/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49-kube-api-access-mpwmm\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.700477 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.873455 5023 generic.go:334] "Generic (PLEG): container finished" podID="3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" containerID="b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e" exitCode=0 Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.873554 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49","Type":"ContainerDied","Data":"b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e"} Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.873607 5023 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3c9cc5c6-0f2e-47f4-8ce9-92456e859e49","Type":"ContainerDied","Data":"7413b3512fa18ed360cd6868801650dde1885563f0c7d525e188fb9952f5b17a"} Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.873709 5023 scope.go:117] "RemoveContainer" containerID="b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.873813 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.875678 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"9042ec14-7c5b-404c-8145-c7b4925ccfff","Type":"ContainerStarted","Data":"66d3251fda3204767749f406146053a8cce2d967f1c48a338f482ff0157b8458"} Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.891633 5023 scope.go:117] "RemoveContainer" containerID="b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e" Feb 19 08:22:33 crc kubenswrapper[5023]: E0219 08:22:33.891987 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e\": container with ID starting with b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e not found: ID does not exist" containerID="b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.892012 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e"} err="failed to get container status \"b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e\": rpc error: code = NotFound desc = could not find container 
\"b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e\": container with ID starting with b1451d057c03edf72fe0569812508219e5a4f04a618fd9a3b9c17cc40ea84e2e not found: ID does not exist" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.920073 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.944277 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.955709 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:33 crc kubenswrapper[5023]: E0219 08:22:33.956116 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" containerName="watcher-decision-engine" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.956140 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" containerName="watcher-decision-engine" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.956349 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" containerName="watcher-decision-engine" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.957060 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.959278 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:22:33 crc kubenswrapper[5023]: I0219 08:22:33.963251 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.077699 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsnp\" (UniqueName: \"kubernetes.io/projected/9befa013-a696-4478-9a41-c0b32da92bac-kube-api-access-crsnp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.077813 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.077852 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.077900 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.078003 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9befa013-a696-4478-9a41-c0b32da92bac-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.178964 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9befa013-a696-4478-9a41-c0b32da92bac-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.179707 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crsnp\" (UniqueName: \"kubernetes.io/projected/9befa013-a696-4478-9a41-c0b32da92bac-kube-api-access-crsnp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.179826 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.179903 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.179986 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.181359 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9befa013-a696-4478-9a41-c0b32da92bac-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.185846 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.186317 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.189752 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.190176 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.199179 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crsnp\" (UniqueName: \"kubernetes.io/projected/9befa013-a696-4478-9a41-c0b32da92bac-kube-api-access-crsnp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.277299 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.877384 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:34 crc kubenswrapper[5023]: W0219 08:22:34.879909 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9befa013_a696_4478_9a41_c0b32da92bac.slice/crio-73675b4419e7856e35f84a4059e3c4962cffd74bbb4f114855caf7d67892f1ac WatchSource:0}: Error finding container 73675b4419e7856e35f84a4059e3c4962cffd74bbb4f114855caf7d67892f1ac: Status 404 returned error can't find the container with id 73675b4419e7856e35f84a4059e3c4962cffd74bbb4f114855caf7d67892f1ac Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.924392 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"9042ec14-7c5b-404c-8145-c7b4925ccfff","Type":"ContainerStarted","Data":"6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1"} Feb 19 08:22:34 crc kubenswrapper[5023]: I0219 08:22:34.946905 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.946885742 podStartE2EDuration="2.946885742s" podCreationTimestamp="2026-02-19 08:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:34.946408899 +0000 UTC m=+1312.603527847" watchObservedRunningTime="2026-02-19 08:22:34.946885742 +0000 UTC m=+1312.604004690" Feb 19 08:22:35 crc kubenswrapper[5023]: I0219 08:22:35.486655 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9cc5c6-0f2e-47f4-8ce9-92456e859e49" path="/var/lib/kubelet/pods/3c9cc5c6-0f2e-47f4-8ce9-92456e859e49/volumes" Feb 19 08:22:35 crc kubenswrapper[5023]: I0219 08:22:35.934123 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9befa013-a696-4478-9a41-c0b32da92bac","Type":"ContainerStarted","Data":"bc7deed97b0dc9fe54497e7053fc4ff2733e4dbd8589ca5394a80673a452a2ff"} Feb 19 08:22:35 crc kubenswrapper[5023]: I0219 08:22:35.934158 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9befa013-a696-4478-9a41-c0b32da92bac","Type":"ContainerStarted","Data":"73675b4419e7856e35f84a4059e3c4962cffd74bbb4f114855caf7d67892f1ac"} Feb 19 08:22:35 crc kubenswrapper[5023]: I0219 08:22:35.957645 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.957613868 podStartE2EDuration="2.957613868s" podCreationTimestamp="2026-02-19 08:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:35.954500516 +0000 UTC m=+1313.611619484" watchObservedRunningTime="2026-02-19 08:22:35.957613868 +0000 UTC m=+1313.614732816" Feb 19 08:22:36 crc kubenswrapper[5023]: I0219 08:22:36.924263 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:38 crc kubenswrapper[5023]: I0219 08:22:38.259191 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:39 crc kubenswrapper[5023]: I0219 08:22:39.190373 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:39 crc kubenswrapper[5023]: I0219 08:22:39.196985 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:39 crc kubenswrapper[5023]: I0219 08:22:39.974026 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:43 crc kubenswrapper[5023]: I0219 08:22:43.259498 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:43 crc kubenswrapper[5023]: I0219 08:22:43.288462 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:44 crc kubenswrapper[5023]: I0219 08:22:44.061171 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:44 crc kubenswrapper[5023]: I0219 08:22:44.277559 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:44 crc kubenswrapper[5023]: I0219 08:22:44.302527 5023 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:45 crc kubenswrapper[5023]: I0219 08:22:45.032240 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:45 crc kubenswrapper[5023]: I0219 08:22:45.075281 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:46 crc kubenswrapper[5023]: I0219 08:22:46.974367 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz"] Feb 19 08:22:46 crc kubenswrapper[5023]: I0219 08:22:46.987870 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-8pgtz"] Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.043963 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher60f2-account-delete-q9gdd"] Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.045711 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.054348 5023 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-4l5d8\" not found" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.057003 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.091446 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher60f2-account-delete-q9gdd"] Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.100794 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.101666 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="9042ec14-7c5b-404c-8145-c7b4925ccfff" containerName="watcher-applier" containerID="cri-o://6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" gracePeriod=30 Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.140118 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhwbp\" (UniqueName: \"kubernetes.io/projected/05cd629b-8e48-423a-a607-fd5a3309af40-kube-api-access-vhwbp\") pod \"watcher60f2-account-delete-q9gdd\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.140179 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05cd629b-8e48-423a-a607-fd5a3309af40-operator-scripts\") pod \"watcher60f2-account-delete-q9gdd\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: E0219 08:22:47.140872 5023 secret.go:188] 
Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:47 crc kubenswrapper[5023]: E0219 08:22:47.140914 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data podName:9befa013-a696-4478-9a41-c0b32da92bac nodeName:}" failed. No retries permitted until 2026-02-19 08:22:47.640896877 +0000 UTC m=+1325.298015825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "9befa013-a696-4478-9a41-c0b32da92bac") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.155592 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.155865 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-kuttl-api-log" containerID="cri-o://e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131" gracePeriod=30 Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.156291 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-api" containerID="cri-o://48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390" gracePeriod=30 Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.245361 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhwbp\" (UniqueName: \"kubernetes.io/projected/05cd629b-8e48-423a-a607-fd5a3309af40-kube-api-access-vhwbp\") pod 
\"watcher60f2-account-delete-q9gdd\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.245434 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05cd629b-8e48-423a-a607-fd5a3309af40-operator-scripts\") pod \"watcher60f2-account-delete-q9gdd\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.246422 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05cd629b-8e48-423a-a607-fd5a3309af40-operator-scripts\") pod \"watcher60f2-account-delete-q9gdd\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.273840 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhwbp\" (UniqueName: \"kubernetes.io/projected/05cd629b-8e48-423a-a607-fd5a3309af40-kube-api-access-vhwbp\") pod \"watcher60f2-account-delete-q9gdd\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.371346 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.536862 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb" path="/var/lib/kubelet/pods/fa0af5f7-60c2-4ee8-93f9-0fe2c093daeb/volumes" Feb 19 08:22:47 crc kubenswrapper[5023]: E0219 08:22:47.655017 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:47 crc kubenswrapper[5023]: E0219 08:22:47.655109 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data podName:9befa013-a696-4478-9a41-c0b32da92bac nodeName:}" failed. No retries permitted until 2026-02-19 08:22:48.655090789 +0000 UTC m=+1326.312209737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "9befa013-a696-4478-9a41-c0b32da92bac") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:47 crc kubenswrapper[5023]: I0219 08:22:47.849172 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher60f2-account-delete-q9gdd"] Feb 19 08:22:47 crc kubenswrapper[5023]: W0219 08:22:47.854813 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05cd629b_8e48_423a_a607_fd5a3309af40.slice/crio-3f4f439b1b2317a5dc1b9aef4fca1b68140d64363d954b3504538795d68f04c5 WatchSource:0}: Error finding container 3f4f439b1b2317a5dc1b9aef4fca1b68140d64363d954b3504538795d68f04c5: Status 404 returned error can't find the container with id 3f4f439b1b2317a5dc1b9aef4fca1b68140d64363d954b3504538795d68f04c5 Feb 19 08:22:48 crc kubenswrapper[5023]: I0219 
08:22:48.070370 5023 generic.go:334] "Generic (PLEG): container finished" podID="4b28d888-a104-4e22-ba05-989521181dcb" containerID="e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131" exitCode=143 Feb 19 08:22:48 crc kubenswrapper[5023]: I0219 08:22:48.070454 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b28d888-a104-4e22-ba05-989521181dcb","Type":"ContainerDied","Data":"e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131"} Feb 19 08:22:48 crc kubenswrapper[5023]: I0219 08:22:48.072006 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" event={"ID":"05cd629b-8e48-423a-a607-fd5a3309af40","Type":"ContainerStarted","Data":"007a819489803ea365f7078f0b7c3a4d2df350acab0892639a03fda3a812f4b6"} Feb 19 08:22:48 crc kubenswrapper[5023]: I0219 08:22:48.072045 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" event={"ID":"05cd629b-8e48-423a-a607-fd5a3309af40","Type":"ContainerStarted","Data":"3f4f439b1b2317a5dc1b9aef4fca1b68140d64363d954b3504538795d68f04c5"} Feb 19 08:22:48 crc kubenswrapper[5023]: I0219 08:22:48.072060 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9befa013-a696-4478-9a41-c0b32da92bac" containerName="watcher-decision-engine" containerID="cri-o://bc7deed97b0dc9fe54497e7053fc4ff2733e4dbd8589ca5394a80673a452a2ff" gracePeriod=30 Feb 19 08:22:48 crc kubenswrapper[5023]: I0219 08:22:48.093237 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" podStartSLOduration=2.093222716 podStartE2EDuration="2.093222716s" podCreationTimestamp="2026-02-19 08:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 08:22:48.090853753 +0000 UTC m=+1325.747972701" watchObservedRunningTime="2026-02-19 08:22:48.093222716 +0000 UTC m=+1325.750341664" Feb 19 08:22:48 crc kubenswrapper[5023]: E0219 08:22:48.260441 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:48 crc kubenswrapper[5023]: E0219 08:22:48.261987 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:48 crc kubenswrapper[5023]: E0219 08:22:48.263514 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:22:48 crc kubenswrapper[5023]: E0219 08:22:48.263581 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="9042ec14-7c5b-404c-8145-c7b4925ccfff" containerName="watcher-applier" Feb 19 08:22:48 crc kubenswrapper[5023]: E0219 08:22:48.671171 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:48 crc 
kubenswrapper[5023]: E0219 08:22:48.671740 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data podName:9befa013-a696-4478-9a41-c0b32da92bac nodeName:}" failed. No retries permitted until 2026-02-19 08:22:50.671716262 +0000 UTC m=+1328.328835210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "9befa013-a696-4478-9a41-c0b32da92bac") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.021763 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.081969 5023 generic.go:334] "Generic (PLEG): container finished" podID="4b28d888-a104-4e22-ba05-989521181dcb" containerID="48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390" exitCode=0 Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.082036 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b28d888-a104-4e22-ba05-989521181dcb","Type":"ContainerDied","Data":"48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390"} Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.082065 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b28d888-a104-4e22-ba05-989521181dcb","Type":"ContainerDied","Data":"c46b1e7421c03f7965fc25067cbd1b812acd775cbe7b634c0cad7009e982d416"} Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.082087 5023 scope.go:117] "RemoveContainer" containerID="48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.082234 5023 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.085122 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-combined-ca-bundle\") pod \"4b28d888-a104-4e22-ba05-989521181dcb\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.085172 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-custom-prometheus-ca\") pod \"4b28d888-a104-4e22-ba05-989521181dcb\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.085205 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-config-data\") pod \"4b28d888-a104-4e22-ba05-989521181dcb\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.085235 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b28d888-a104-4e22-ba05-989521181dcb-logs\") pod \"4b28d888-a104-4e22-ba05-989521181dcb\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.085277 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmts7\" (UniqueName: \"kubernetes.io/projected/4b28d888-a104-4e22-ba05-989521181dcb-kube-api-access-lmts7\") pod \"4b28d888-a104-4e22-ba05-989521181dcb\" (UID: \"4b28d888-a104-4e22-ba05-989521181dcb\") " Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.085280 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="05cd629b-8e48-423a-a607-fd5a3309af40" containerID="007a819489803ea365f7078f0b7c3a4d2df350acab0892639a03fda3a812f4b6" exitCode=0 Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.085335 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" event={"ID":"05cd629b-8e48-423a-a607-fd5a3309af40","Type":"ContainerDied","Data":"007a819489803ea365f7078f0b7c3a4d2df350acab0892639a03fda3a812f4b6"} Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.086363 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b28d888-a104-4e22-ba05-989521181dcb-logs" (OuterVolumeSpecName: "logs") pod "4b28d888-a104-4e22-ba05-989521181dcb" (UID: "4b28d888-a104-4e22-ba05-989521181dcb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.121914 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b28d888-a104-4e22-ba05-989521181dcb" (UID: "4b28d888-a104-4e22-ba05-989521181dcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.123924 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b28d888-a104-4e22-ba05-989521181dcb-kube-api-access-lmts7" (OuterVolumeSpecName: "kube-api-access-lmts7") pod "4b28d888-a104-4e22-ba05-989521181dcb" (UID: "4b28d888-a104-4e22-ba05-989521181dcb"). InnerVolumeSpecName "kube-api-access-lmts7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.144980 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-config-data" (OuterVolumeSpecName: "config-data") pod "4b28d888-a104-4e22-ba05-989521181dcb" (UID: "4b28d888-a104-4e22-ba05-989521181dcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.157074 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4b28d888-a104-4e22-ba05-989521181dcb" (UID: "4b28d888-a104-4e22-ba05-989521181dcb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.187275 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b28d888-a104-4e22-ba05-989521181dcb-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.187316 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmts7\" (UniqueName: \"kubernetes.io/projected/4b28d888-a104-4e22-ba05-989521181dcb-kube-api-access-lmts7\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.187328 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.187339 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-custom-prometheus-ca\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.187350 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b28d888-a104-4e22-ba05-989521181dcb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.193247 5023 scope.go:117] "RemoveContainer" containerID="e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.219888 5023 scope.go:117] "RemoveContainer" containerID="48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390" Feb 19 08:22:49 crc kubenswrapper[5023]: E0219 08:22:49.220289 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390\": container with ID starting with 48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390 not found: ID does not exist" containerID="48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.220320 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390"} err="failed to get container status \"48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390\": rpc error: code = NotFound desc = could not find container \"48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390\": container with ID starting with 48126b8e3d4875c8f1a3c965fe628f543ab2ab845090166342378e6448bce390 not found: ID does not exist" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.220340 5023 scope.go:117] "RemoveContainer" containerID="e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131" Feb 19 08:22:49 crc kubenswrapper[5023]: E0219 08:22:49.220821 5023 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131\": container with ID starting with e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131 not found: ID does not exist" containerID="e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.220840 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131"} err="failed to get container status \"e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131\": rpc error: code = NotFound desc = could not find container \"e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131\": container with ID starting with e6ba542688eb0503614390896e58804bc1dcf3908aa2f0c2bfafc37a7b614131 not found: ID does not exist" Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.268084 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.268480 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-central-agent" containerID="cri-o://8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b" gracePeriod=30 Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.268583 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="sg-core" containerID="cri-o://8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd" gracePeriod=30 Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.268589 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="proxy-httpd" containerID="cri-o://5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d" gracePeriod=30 Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.268589 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-notification-agent" containerID="cri-o://92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14" gracePeriod=30 Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.472522 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:49 crc kubenswrapper[5023]: I0219 08:22:49.486387 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.121854 5023 generic.go:334] "Generic (PLEG): container finished" podID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerID="5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d" exitCode=0 Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.122376 5023 generic.go:334] "Generic (PLEG): container finished" podID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerID="8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd" exitCode=2 Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.122438 5023 generic.go:334] "Generic (PLEG): container finished" podID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerID="8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b" exitCode=0 Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.122544 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerDied","Data":"5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d"} Feb 19 08:22:50 crc kubenswrapper[5023]: 
I0219 08:22:50.122641 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerDied","Data":"8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd"} Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.122716 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerDied","Data":"8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b"} Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.458684 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.507654 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhwbp\" (UniqueName: \"kubernetes.io/projected/05cd629b-8e48-423a-a607-fd5a3309af40-kube-api-access-vhwbp\") pod \"05cd629b-8e48-423a-a607-fd5a3309af40\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.507817 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05cd629b-8e48-423a-a607-fd5a3309af40-operator-scripts\") pod \"05cd629b-8e48-423a-a607-fd5a3309af40\" (UID: \"05cd629b-8e48-423a-a607-fd5a3309af40\") " Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.509248 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05cd629b-8e48-423a-a607-fd5a3309af40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05cd629b-8e48-423a-a607-fd5a3309af40" (UID: "05cd629b-8e48-423a-a607-fd5a3309af40"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.515993 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cd629b-8e48-423a-a607-fd5a3309af40-kube-api-access-vhwbp" (OuterVolumeSpecName: "kube-api-access-vhwbp") pod "05cd629b-8e48-423a-a607-fd5a3309af40" (UID: "05cd629b-8e48-423a-a607-fd5a3309af40"). InnerVolumeSpecName "kube-api-access-vhwbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.609005 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05cd629b-8e48-423a-a607-fd5a3309af40-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.609036 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhwbp\" (UniqueName: \"kubernetes.io/projected/05cd629b-8e48-423a-a607-fd5a3309af40-kube-api-access-vhwbp\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.665305 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.709658 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xksc4\" (UniqueName: \"kubernetes.io/projected/9042ec14-7c5b-404c-8145-c7b4925ccfff-kube-api-access-xksc4\") pod \"9042ec14-7c5b-404c-8145-c7b4925ccfff\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.709776 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-combined-ca-bundle\") pod \"9042ec14-7c5b-404c-8145-c7b4925ccfff\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.709813 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-config-data\") pod \"9042ec14-7c5b-404c-8145-c7b4925ccfff\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.710337 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9042ec14-7c5b-404c-8145-c7b4925ccfff-logs\") pod \"9042ec14-7c5b-404c-8145-c7b4925ccfff\" (UID: \"9042ec14-7c5b-404c-8145-c7b4925ccfff\") " Feb 19 08:22:50 crc kubenswrapper[5023]: E0219 08:22:50.710797 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:50 crc kubenswrapper[5023]: E0219 08:22:50.710851 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data podName:9befa013-a696-4478-9a41-c0b32da92bac nodeName:}" failed. 
No retries permitted until 2026-02-19 08:22:54.710834961 +0000 UTC m=+1332.367953909 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "9befa013-a696-4478-9a41-c0b32da92bac") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.711118 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9042ec14-7c5b-404c-8145-c7b4925ccfff-logs" (OuterVolumeSpecName: "logs") pod "9042ec14-7c5b-404c-8145-c7b4925ccfff" (UID: "9042ec14-7c5b-404c-8145-c7b4925ccfff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.713345 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9042ec14-7c5b-404c-8145-c7b4925ccfff-kube-api-access-xksc4" (OuterVolumeSpecName: "kube-api-access-xksc4") pod "9042ec14-7c5b-404c-8145-c7b4925ccfff" (UID: "9042ec14-7c5b-404c-8145-c7b4925ccfff"). InnerVolumeSpecName "kube-api-access-xksc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.734426 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9042ec14-7c5b-404c-8145-c7b4925ccfff" (UID: "9042ec14-7c5b-404c-8145-c7b4925ccfff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.748928 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-config-data" (OuterVolumeSpecName: "config-data") pod "9042ec14-7c5b-404c-8145-c7b4925ccfff" (UID: "9042ec14-7c5b-404c-8145-c7b4925ccfff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.811988 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9042ec14-7c5b-404c-8145-c7b4925ccfff-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.812023 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xksc4\" (UniqueName: \"kubernetes.io/projected/9042ec14-7c5b-404c-8145-c7b4925ccfff-kube-api-access-xksc4\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.812035 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:50 crc kubenswrapper[5023]: I0219 08:22:50.812044 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9042ec14-7c5b-404c-8145-c7b4925ccfff-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.136591 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" event={"ID":"05cd629b-8e48-423a-a607-fd5a3309af40","Type":"ContainerDied","Data":"3f4f439b1b2317a5dc1b9aef4fca1b68140d64363d954b3504538795d68f04c5"} Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.136644 5023 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="3f4f439b1b2317a5dc1b9aef4fca1b68140d64363d954b3504538795d68f04c5" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.136692 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher60f2-account-delete-q9gdd" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.139489 5023 generic.go:334] "Generic (PLEG): container finished" podID="9042ec14-7c5b-404c-8145-c7b4925ccfff" containerID="6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" exitCode=0 Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.139517 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"9042ec14-7c5b-404c-8145-c7b4925ccfff","Type":"ContainerDied","Data":"6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1"} Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.139541 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"9042ec14-7c5b-404c-8145-c7b4925ccfff","Type":"ContainerDied","Data":"66d3251fda3204767749f406146053a8cce2d967f1c48a338f482ff0157b8458"} Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.139557 5023 scope.go:117] "RemoveContainer" containerID="6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.139584 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.171743 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.171915 5023 scope.go:117] "RemoveContainer" containerID="6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" Feb 19 08:22:51 crc kubenswrapper[5023]: E0219 08:22:51.172640 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1\": container with ID starting with 6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1 not found: ID does not exist" containerID="6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.172675 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1"} err="failed to get container status \"6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1\": rpc error: code = NotFound desc = could not find container \"6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1\": container with ID starting with 6bc5b8e09217ad79ecf2e7015119d922594a21cba786325ea872f1eddabea9f1 not found: ID does not exist" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.178419 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.485860 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b28d888-a104-4e22-ba05-989521181dcb" path="/var/lib/kubelet/pods/4b28d888-a104-4e22-ba05-989521181dcb/volumes" Feb 19 08:22:51 crc kubenswrapper[5023]: I0219 08:22:51.486608 5023 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="9042ec14-7c5b-404c-8145-c7b4925ccfff" path="/var/lib/kubelet/pods/9042ec14-7c5b-404c-8145-c7b4925ccfff/volumes" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.091387 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher60f2-account-delete-q9gdd"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.134461 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xwcrt"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.150583 5023 generic.go:334] "Generic (PLEG): container finished" podID="9befa013-a696-4478-9a41-c0b32da92bac" containerID="bc7deed97b0dc9fe54497e7053fc4ff2733e4dbd8589ca5394a80673a452a2ff" exitCode=0 Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.150636 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9befa013-a696-4478-9a41-c0b32da92bac","Type":"ContainerDied","Data":"bc7deed97b0dc9fe54497e7053fc4ff2733e4dbd8589ca5394a80673a452a2ff"} Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.158343 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.165468 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-xwcrt"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.172258 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher60f2-account-delete-q9gdd"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.179419 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-60f2-account-create-update-pg8xr"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.210472 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.238193 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-custom-prometheus-ca\") pod \"9befa013-a696-4478-9a41-c0b32da92bac\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.238405 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9befa013-a696-4478-9a41-c0b32da92bac-logs\") pod \"9befa013-a696-4478-9a41-c0b32da92bac\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.238452 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crsnp\" (UniqueName: \"kubernetes.io/projected/9befa013-a696-4478-9a41-c0b32da92bac-kube-api-access-crsnp\") pod \"9befa013-a696-4478-9a41-c0b32da92bac\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.238498 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-combined-ca-bundle\") pod \"9befa013-a696-4478-9a41-c0b32da92bac\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.239024 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9befa013-a696-4478-9a41-c0b32da92bac-logs" (OuterVolumeSpecName: "logs") pod "9befa013-a696-4478-9a41-c0b32da92bac" (UID: "9befa013-a696-4478-9a41-c0b32da92bac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.239509 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data\") pod \"9befa013-a696-4478-9a41-c0b32da92bac\" (UID: \"9befa013-a696-4478-9a41-c0b32da92bac\") " Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.239959 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9befa013-a696-4478-9a41-c0b32da92bac-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.251885 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9befa013-a696-4478-9a41-c0b32da92bac-kube-api-access-crsnp" (OuterVolumeSpecName: "kube-api-access-crsnp") pod "9befa013-a696-4478-9a41-c0b32da92bac" (UID: "9befa013-a696-4478-9a41-c0b32da92bac"). InnerVolumeSpecName "kube-api-access-crsnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.253732 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-lj4xm"] Feb 19 08:22:52 crc kubenswrapper[5023]: E0219 08:22:52.254101 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9042ec14-7c5b-404c-8145-c7b4925ccfff" containerName="watcher-applier" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254117 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="9042ec14-7c5b-404c-8145-c7b4925ccfff" containerName="watcher-applier" Feb 19 08:22:52 crc kubenswrapper[5023]: E0219 08:22:52.254129 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cd629b-8e48-423a-a607-fd5a3309af40" containerName="mariadb-account-delete" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254135 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cd629b-8e48-423a-a607-fd5a3309af40" containerName="mariadb-account-delete" Feb 19 08:22:52 crc kubenswrapper[5023]: E0219 08:22:52.254150 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-api" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254159 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-api" Feb 19 08:22:52 crc kubenswrapper[5023]: E0219 08:22:52.254173 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9befa013-a696-4478-9a41-c0b32da92bac" containerName="watcher-decision-engine" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254179 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="9befa013-a696-4478-9a41-c0b32da92bac" containerName="watcher-decision-engine" Feb 19 08:22:52 crc kubenswrapper[5023]: E0219 08:22:52.254196 5023 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-kuttl-api-log" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254203 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-kuttl-api-log" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254346 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="9befa013-a696-4478-9a41-c0b32da92bac" containerName="watcher-decision-engine" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254360 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-kuttl-api-log" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254369 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="9042ec14-7c5b-404c-8145-c7b4925ccfff" containerName="watcher-applier" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254374 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="05cd629b-8e48-423a-a607-fd5a3309af40" containerName="mariadb-account-delete" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.254390 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b28d888-a104-4e22-ba05-989521181dcb" containerName="watcher-api" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.255117 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.279806 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9befa013-a696-4478-9a41-c0b32da92bac" (UID: "9befa013-a696-4478-9a41-c0b32da92bac"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.283319 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9befa013-a696-4478-9a41-c0b32da92bac" (UID: "9befa013-a696-4478-9a41-c0b32da92bac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.307978 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lj4xm"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.312082 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data" (OuterVolumeSpecName: "config-data") pod "9befa013-a696-4478-9a41-c0b32da92bac" (UID: "9befa013-a696-4478-9a41-c0b32da92bac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.313786 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-5046-account-create-update-njk7n"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.314949 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.317079 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.340546 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctddt\" (UniqueName: \"kubernetes.io/projected/6a5b9f83-8d00-411b-83dc-bcf3872c3451-kube-api-access-ctddt\") pod \"watcher-5046-account-create-update-njk7n\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.340810 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a5b9f83-8d00-411b-83dc-bcf3872c3451-operator-scripts\") pod \"watcher-5046-account-create-update-njk7n\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.340887 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8a723e-a22d-4601-a71c-c9145b58da3a-operator-scripts\") pod \"watcher-db-create-lj4xm\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.341013 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9shj\" (UniqueName: \"kubernetes.io/projected/8b8a723e-a22d-4601-a71c-c9145b58da3a-kube-api-access-v9shj\") pod \"watcher-db-create-lj4xm\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " 
pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.341132 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.341190 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.341244 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crsnp\" (UniqueName: \"kubernetes.io/projected/9befa013-a696-4478-9a41-c0b32da92bac-kube-api-access-crsnp\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.341302 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9befa013-a696-4478-9a41-c0b32da92bac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.350915 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5046-account-create-update-njk7n"] Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.443258 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctddt\" (UniqueName: \"kubernetes.io/projected/6a5b9f83-8d00-411b-83dc-bcf3872c3451-kube-api-access-ctddt\") pod \"watcher-5046-account-create-update-njk7n\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.443651 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6a5b9f83-8d00-411b-83dc-bcf3872c3451-operator-scripts\") pod \"watcher-5046-account-create-update-njk7n\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.443786 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8a723e-a22d-4601-a71c-c9145b58da3a-operator-scripts\") pod \"watcher-db-create-lj4xm\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.443947 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9shj\" (UniqueName: \"kubernetes.io/projected/8b8a723e-a22d-4601-a71c-c9145b58da3a-kube-api-access-v9shj\") pod \"watcher-db-create-lj4xm\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.444640 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a5b9f83-8d00-411b-83dc-bcf3872c3451-operator-scripts\") pod \"watcher-5046-account-create-update-njk7n\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.444659 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8a723e-a22d-4601-a71c-c9145b58da3a-operator-scripts\") pod \"watcher-db-create-lj4xm\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.463294 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ctddt\" (UniqueName: \"kubernetes.io/projected/6a5b9f83-8d00-411b-83dc-bcf3872c3451-kube-api-access-ctddt\") pod \"watcher-5046-account-create-update-njk7n\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.463426 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9shj\" (UniqueName: \"kubernetes.io/projected/8b8a723e-a22d-4601-a71c-c9145b58da3a-kube-api-access-v9shj\") pod \"watcher-db-create-lj4xm\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.610206 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:52 crc kubenswrapper[5023]: I0219 08:22:52.660610 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.114885 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lj4xm"] Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.159880 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-lj4xm" event={"ID":"8b8a723e-a22d-4601-a71c-c9145b58da3a","Type":"ContainerStarted","Data":"85c45bc9026fe96b74151803c163b217feead75e3335fe5c9ca7f0933f379783"} Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.162169 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9befa013-a696-4478-9a41-c0b32da92bac","Type":"ContainerDied","Data":"73675b4419e7856e35f84a4059e3c4962cffd74bbb4f114855caf7d67892f1ac"} Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.162235 5023 scope.go:117] 
"RemoveContainer" containerID="bc7deed97b0dc9fe54497e7053fc4ff2733e4dbd8589ca5394a80673a452a2ff" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.162285 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.229698 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5046-account-create-update-njk7n"] Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.242536 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.252293 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.495690 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05cd629b-8e48-423a-a607-fd5a3309af40" path="/var/lib/kubelet/pods/05cd629b-8e48-423a-a607-fd5a3309af40/volumes" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.502479 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069710ef-dc0f-4e31-a6e0-72bd60aaa878" path="/var/lib/kubelet/pods/069710ef-dc0f-4e31-a6e0-72bd60aaa878/volumes" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.503091 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c398db-8586-4cff-a9cf-0b61425ff87f" path="/var/lib/kubelet/pods/38c398db-8586-4cff-a9cf-0b61425ff87f/volumes" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.503611 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9befa013-a696-4478-9a41-c0b32da92bac" path="/var/lib/kubelet/pods/9befa013-a696-4478-9a41-c0b32da92bac/volumes" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.871231 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.974980 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-ceilometer-tls-certs\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975064 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-scripts\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975099 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-combined-ca-bundle\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975127 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-log-httpd\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975189 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwcrw\" (UniqueName: \"kubernetes.io/projected/f5e63977-90bc-4e8d-8597-8e87ad5966c4-kube-api-access-pwcrw\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975241 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-run-httpd\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975274 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-config-data\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975294 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-sg-core-conf-yaml\") pod \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\" (UID: \"f5e63977-90bc-4e8d-8597-8e87ad5966c4\") " Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.975961 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.976146 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.983962 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-scripts" (OuterVolumeSpecName: "scripts") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:53 crc kubenswrapper[5023]: I0219 08:22:53.984404 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e63977-90bc-4e8d-8597-8e87ad5966c4-kube-api-access-pwcrw" (OuterVolumeSpecName: "kube-api-access-pwcrw") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "kube-api-access-pwcrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.017009 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.017979 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.052078 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.076811 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwcrw\" (UniqueName: \"kubernetes.io/projected/f5e63977-90bc-4e8d-8597-8e87ad5966c4-kube-api-access-pwcrw\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.076865 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.076877 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.076886 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.076895 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.076903 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.076911 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5e63977-90bc-4e8d-8597-8e87ad5966c4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.082819 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-config-data" (OuterVolumeSpecName: "config-data") pod "f5e63977-90bc-4e8d-8597-8e87ad5966c4" (UID: "f5e63977-90bc-4e8d-8597-8e87ad5966c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.175464 5023 generic.go:334] "Generic (PLEG): container finished" podID="8b8a723e-a22d-4601-a71c-c9145b58da3a" containerID="6dc37f692f324c1c018d21f4a2e2f05ba852fbb58deb874120db64abd04fe040" exitCode=0 Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.175538 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-lj4xm" event={"ID":"8b8a723e-a22d-4601-a71c-c9145b58da3a","Type":"ContainerDied","Data":"6dc37f692f324c1c018d21f4a2e2f05ba852fbb58deb874120db64abd04fe040"} Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.178135 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e63977-90bc-4e8d-8597-8e87ad5966c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.191091 5023 generic.go:334] "Generic (PLEG): container finished" podID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerID="92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14" exitCode=0 Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.191269 5023 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerDied","Data":"92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14"} Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.191325 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f5e63977-90bc-4e8d-8597-8e87ad5966c4","Type":"ContainerDied","Data":"e5f339c74bfb8fd88357350a240d4607f3e7ab5bf09a44c0224d730910644119"} Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.191362 5023 scope.go:117] "RemoveContainer" containerID="5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.191812 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.202223 5023 generic.go:334] "Generic (PLEG): container finished" podID="6a5b9f83-8d00-411b-83dc-bcf3872c3451" containerID="335bc3aa740c637dd201b05f8900bea454014e637bdc45d736de8189af556440" exitCode=0 Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.202318 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" event={"ID":"6a5b9f83-8d00-411b-83dc-bcf3872c3451","Type":"ContainerDied","Data":"335bc3aa740c637dd201b05f8900bea454014e637bdc45d736de8189af556440"} Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.202351 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" event={"ID":"6a5b9f83-8d00-411b-83dc-bcf3872c3451","Type":"ContainerStarted","Data":"31ca8a1b7046378dad8a6e1830f8bbeb4d7d19425066698d2e5ba4e0fabfa4bf"} Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.225987 5023 scope.go:117] "RemoveContainer" containerID="8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd" Feb 19 08:22:54 
crc kubenswrapper[5023]: I0219 08:22:54.257626 5023 scope.go:117] "RemoveContainer" containerID="92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.266127 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.289776 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.299870 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.300418 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-notification-agent" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300444 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-notification-agent" Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.300461 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-central-agent" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300471 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-central-agent" Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.300508 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="sg-core" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300514 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="sg-core" Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.300523 5023 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="proxy-httpd" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300581 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="proxy-httpd" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300776 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-central-agent" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300838 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="proxy-httpd" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300862 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="ceilometer-notification-agent" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.300875 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" containerName="sg-core" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.302796 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.306888 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.308387 5023 scope.go:117] "RemoveContainer" containerID="8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.308588 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.329519 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.329531 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.409463 5023 scope.go:117] "RemoveContainer" containerID="5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d" Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.410648 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d\": container with ID starting with 5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d not found: ID does not exist" containerID="5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.410689 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d"} err="failed to get container status \"5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d\": rpc error: code = NotFound desc = could not find container 
\"5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d\": container with ID starting with 5d8e9aacd0ad6e536126dde89725a3aa905ea014c772111034869a08ae791a6d not found: ID does not exist" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.410717 5023 scope.go:117] "RemoveContainer" containerID="8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd" Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.410988 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd\": container with ID starting with 8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd not found: ID does not exist" containerID="8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.411036 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd"} err="failed to get container status \"8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd\": rpc error: code = NotFound desc = could not find container \"8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd\": container with ID starting with 8481a2d32f3cddc7fa26adc010480550ffcd21d23acadcb841673a2ab21ba6dd not found: ID does not exist" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.411051 5023 scope.go:117] "RemoveContainer" containerID="92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14" Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.411308 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14\": container with ID starting with 92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14 not found: ID does not exist" 
containerID="92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.411329 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14"} err="failed to get container status \"92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14\": rpc error: code = NotFound desc = could not find container \"92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14\": container with ID starting with 92287758e37e4454285f2485ac2a7ff7f2bf65ca9784b2b3088085f615d83a14 not found: ID does not exist" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.411358 5023 scope.go:117] "RemoveContainer" containerID="8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b" Feb 19 08:22:54 crc kubenswrapper[5023]: E0219 08:22:54.411715 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b\": container with ID starting with 8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b not found: ID does not exist" containerID="8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.411758 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b"} err="failed to get container status \"8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b\": rpc error: code = NotFound desc = could not find container \"8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b\": container with ID starting with 8ca201ae15500942e5ad263d1b8ee9e1240ec9275bafb8e28e2d82c079da7a5b not found: ID does not exist" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.485876 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2xn\" (UniqueName: \"kubernetes.io/projected/c150d099-6881-4e8d-9942-12909cbdf3b7-kube-api-access-kc2xn\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.485975 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-scripts\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.486056 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.486141 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-run-httpd\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.486212 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.486356 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-config-data\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.486495 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-log-httpd\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.486524 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590515 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590572 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-config-data\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590594 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-log-httpd\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590616 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590664 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2xn\" (UniqueName: \"kubernetes.io/projected/c150d099-6881-4e8d-9942-12909cbdf3b7-kube-api-access-kc2xn\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590679 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-scripts\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590705 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.590740 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-run-httpd\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.592834 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-log-httpd\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.592893 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-run-httpd\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.602731 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-scripts\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.602836 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.603128 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.603281 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.604603 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-config-data\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.622179 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2xn\" (UniqueName: \"kubernetes.io/projected/c150d099-6881-4e8d-9942-12909cbdf3b7-kube-api-access-kc2xn\") pod \"ceilometer-0\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:54 crc kubenswrapper[5023]: I0219 08:22:54.713470 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.159560 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:22:55 crc kubenswrapper[5023]: W0219 08:22:55.164096 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc150d099_6881_4e8d_9942_12909cbdf3b7.slice/crio-21751efb0bef0f3f0a6a3f36e91f1f1d44b18b520a7639774a8f92ed5a020449 WatchSource:0}: Error finding container 21751efb0bef0f3f0a6a3f36e91f1f1d44b18b520a7639774a8f92ed5a020449: Status 404 returned error can't find the container with id 21751efb0bef0f3f0a6a3f36e91f1f1d44b18b520a7639774a8f92ed5a020449 Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.228941 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerStarted","Data":"21751efb0bef0f3f0a6a3f36e91f1f1d44b18b520a7639774a8f92ed5a020449"} Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.505042 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e63977-90bc-4e8d-8597-8e87ad5966c4" path="/var/lib/kubelet/pods/f5e63977-90bc-4e8d-8597-8e87ad5966c4/volumes" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.732260 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.742434 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.919433 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9shj\" (UniqueName: \"kubernetes.io/projected/8b8a723e-a22d-4601-a71c-c9145b58da3a-kube-api-access-v9shj\") pod \"8b8a723e-a22d-4601-a71c-c9145b58da3a\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.920220 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctddt\" (UniqueName: \"kubernetes.io/projected/6a5b9f83-8d00-411b-83dc-bcf3872c3451-kube-api-access-ctddt\") pod \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.920578 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a5b9f83-8d00-411b-83dc-bcf3872c3451-operator-scripts\") pod \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\" (UID: \"6a5b9f83-8d00-411b-83dc-bcf3872c3451\") " Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.921577 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5b9f83-8d00-411b-83dc-bcf3872c3451-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a5b9f83-8d00-411b-83dc-bcf3872c3451" (UID: "6a5b9f83-8d00-411b-83dc-bcf3872c3451"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.922880 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8a723e-a22d-4601-a71c-c9145b58da3a-operator-scripts\") pod \"8b8a723e-a22d-4601-a71c-c9145b58da3a\" (UID: \"8b8a723e-a22d-4601-a71c-c9145b58da3a\") " Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.923515 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8a723e-a22d-4601-a71c-c9145b58da3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b8a723e-a22d-4601-a71c-c9145b58da3a" (UID: "8b8a723e-a22d-4601-a71c-c9145b58da3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.923984 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8a723e-a22d-4601-a71c-c9145b58da3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.924081 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a5b9f83-8d00-411b-83dc-bcf3872c3451-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.926272 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8a723e-a22d-4601-a71c-c9145b58da3a-kube-api-access-v9shj" (OuterVolumeSpecName: "kube-api-access-v9shj") pod "8b8a723e-a22d-4601-a71c-c9145b58da3a" (UID: "8b8a723e-a22d-4601-a71c-c9145b58da3a"). InnerVolumeSpecName "kube-api-access-v9shj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:55 crc kubenswrapper[5023]: I0219 08:22:55.926906 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5b9f83-8d00-411b-83dc-bcf3872c3451-kube-api-access-ctddt" (OuterVolumeSpecName: "kube-api-access-ctddt") pod "6a5b9f83-8d00-411b-83dc-bcf3872c3451" (UID: "6a5b9f83-8d00-411b-83dc-bcf3872c3451"). InnerVolumeSpecName "kube-api-access-ctddt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.024842 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9shj\" (UniqueName: \"kubernetes.io/projected/8b8a723e-a22d-4601-a71c-c9145b58da3a-kube-api-access-v9shj\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.024892 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctddt\" (UniqueName: \"kubernetes.io/projected/6a5b9f83-8d00-411b-83dc-bcf3872c3451-kube-api-access-ctddt\") on node \"crc\" DevicePath \"\"" Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.238881 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.238878 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5046-account-create-update-njk7n" event={"ID":"6a5b9f83-8d00-411b-83dc-bcf3872c3451","Type":"ContainerDied","Data":"31ca8a1b7046378dad8a6e1830f8bbeb4d7d19425066698d2e5ba4e0fabfa4bf"} Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.238969 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ca8a1b7046378dad8a6e1830f8bbeb4d7d19425066698d2e5ba4e0fabfa4bf" Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.240020 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-lj4xm" event={"ID":"8b8a723e-a22d-4601-a71c-c9145b58da3a","Type":"ContainerDied","Data":"85c45bc9026fe96b74151803c163b217feead75e3335fe5c9ca7f0933f379783"} Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.240058 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c45bc9026fe96b74151803c163b217feead75e3335fe5c9ca7f0933f379783" Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.240126 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lj4xm" Feb 19 08:22:56 crc kubenswrapper[5023]: I0219 08:22:56.241717 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerStarted","Data":"350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5"} Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.256901 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerStarted","Data":"a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927"} Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.257187 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerStarted","Data":"18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3"} Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.739490 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg"] Feb 19 08:22:57 crc kubenswrapper[5023]: E0219 08:22:57.740150 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5b9f83-8d00-411b-83dc-bcf3872c3451" containerName="mariadb-account-create-update" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.740170 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5b9f83-8d00-411b-83dc-bcf3872c3451" containerName="mariadb-account-create-update" Feb 19 08:22:57 crc kubenswrapper[5023]: E0219 08:22:57.740196 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8a723e-a22d-4601-a71c-c9145b58da3a" containerName="mariadb-database-create" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.740206 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8a723e-a22d-4601-a71c-c9145b58da3a" 
containerName="mariadb-database-create" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.740432 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5b9f83-8d00-411b-83dc-bcf3872c3451" containerName="mariadb-account-create-update" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.740451 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8a723e-a22d-4601-a71c-c9145b58da3a" containerName="mariadb-database-create" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.741157 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.743604 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.747472 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-tp724" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.755901 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.755958 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.755992 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j76rd\" (UniqueName: \"kubernetes.io/projected/4c5f4de9-78af-454e-839d-3d21667acac2-kube-api-access-j76rd\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.756083 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-config-data\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.762108 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg"] Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.857590 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-config-data\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.857718 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.857771 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-db-sync-config-data\") pod 
\"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.857802 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j76rd\" (UniqueName: \"kubernetes.io/projected/4c5f4de9-78af-454e-839d-3d21667acac2-kube-api-access-j76rd\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.866283 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.866468 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.869887 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-config-data\") pod \"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:57 crc kubenswrapper[5023]: I0219 08:22:57.878677 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j76rd\" (UniqueName: \"kubernetes.io/projected/4c5f4de9-78af-454e-839d-3d21667acac2-kube-api-access-j76rd\") pod 
\"watcher-kuttl-db-sync-2hgbg\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:58 crc kubenswrapper[5023]: I0219 08:22:58.055277 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:22:58 crc kubenswrapper[5023]: W0219 08:22:58.509098 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c5f4de9_78af_454e_839d_3d21667acac2.slice/crio-94b430625e44d1384c07a760e53699bd2deada4ffa2c10f3ed9e57052298abbc WatchSource:0}: Error finding container 94b430625e44d1384c07a760e53699bd2deada4ffa2c10f3ed9e57052298abbc: Status 404 returned error can't find the container with id 94b430625e44d1384c07a760e53699bd2deada4ffa2c10f3ed9e57052298abbc Feb 19 08:22:58 crc kubenswrapper[5023]: I0219 08:22:58.515975 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg"] Feb 19 08:22:59 crc kubenswrapper[5023]: I0219 08:22:59.274481 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" event={"ID":"4c5f4de9-78af-454e-839d-3d21667acac2","Type":"ContainerStarted","Data":"6363cb17270ab2befd2183b68c08f6b98f5d742f6dbc4d7ebbd5b4801810e23b"} Feb 19 08:22:59 crc kubenswrapper[5023]: I0219 08:22:59.274822 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" event={"ID":"4c5f4de9-78af-454e-839d-3d21667acac2","Type":"ContainerStarted","Data":"94b430625e44d1384c07a760e53699bd2deada4ffa2c10f3ed9e57052298abbc"} Feb 19 08:22:59 crc kubenswrapper[5023]: I0219 08:22:59.277106 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerStarted","Data":"dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961"} Feb 19 08:22:59 crc kubenswrapper[5023]: I0219 08:22:59.277289 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:22:59 crc kubenswrapper[5023]: I0219 08:22:59.292526 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" podStartSLOduration=2.292507937 podStartE2EDuration="2.292507937s" podCreationTimestamp="2026-02-19 08:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:22:59.289452476 +0000 UTC m=+1336.946571424" watchObservedRunningTime="2026-02-19 08:22:59.292507937 +0000 UTC m=+1336.949626885" Feb 19 08:22:59 crc kubenswrapper[5023]: I0219 08:22:59.333469 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.678880964 podStartE2EDuration="5.333442111s" podCreationTimestamp="2026-02-19 08:22:54 +0000 UTC" firstStartedPulling="2026-02-19 08:22:55.166783338 +0000 UTC m=+1332.823902286" lastFinishedPulling="2026-02-19 08:22:58.821344485 +0000 UTC m=+1336.478463433" observedRunningTime="2026-02-19 08:22:59.328040198 +0000 UTC m=+1336.985159156" watchObservedRunningTime="2026-02-19 08:22:59.333442111 +0000 UTC m=+1336.990561069" Feb 19 08:23:01 crc kubenswrapper[5023]: I0219 08:23:01.293160 5023 generic.go:334] "Generic (PLEG): container finished" podID="4c5f4de9-78af-454e-839d-3d21667acac2" containerID="6363cb17270ab2befd2183b68c08f6b98f5d742f6dbc4d7ebbd5b4801810e23b" exitCode=0 Feb 19 08:23:01 crc kubenswrapper[5023]: I0219 08:23:01.293526 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" 
event={"ID":"4c5f4de9-78af-454e-839d-3d21667acac2","Type":"ContainerDied","Data":"6363cb17270ab2befd2183b68c08f6b98f5d742f6dbc4d7ebbd5b4801810e23b"} Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.650565 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.835696 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j76rd\" (UniqueName: \"kubernetes.io/projected/4c5f4de9-78af-454e-839d-3d21667acac2-kube-api-access-j76rd\") pod \"4c5f4de9-78af-454e-839d-3d21667acac2\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.835760 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-combined-ca-bundle\") pod \"4c5f4de9-78af-454e-839d-3d21667acac2\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.835835 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-db-sync-config-data\") pod \"4c5f4de9-78af-454e-839d-3d21667acac2\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.835869 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-config-data\") pod \"4c5f4de9-78af-454e-839d-3d21667acac2\" (UID: \"4c5f4de9-78af-454e-839d-3d21667acac2\") " Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.841843 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4c5f4de9-78af-454e-839d-3d21667acac2" (UID: "4c5f4de9-78af-454e-839d-3d21667acac2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.856841 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5f4de9-78af-454e-839d-3d21667acac2-kube-api-access-j76rd" (OuterVolumeSpecName: "kube-api-access-j76rd") pod "4c5f4de9-78af-454e-839d-3d21667acac2" (UID: "4c5f4de9-78af-454e-839d-3d21667acac2"). InnerVolumeSpecName "kube-api-access-j76rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.858910 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c5f4de9-78af-454e-839d-3d21667acac2" (UID: "4c5f4de9-78af-454e-839d-3d21667acac2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.877128 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-config-data" (OuterVolumeSpecName: "config-data") pod "4c5f4de9-78af-454e-839d-3d21667acac2" (UID: "4c5f4de9-78af-454e-839d-3d21667acac2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.938317 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j76rd\" (UniqueName: \"kubernetes.io/projected/4c5f4de9-78af-454e-839d-3d21667acac2-kube-api-access-j76rd\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.938705 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.938717 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:02 crc kubenswrapper[5023]: I0219 08:23:02.938726 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5f4de9-78af-454e-839d-3d21667acac2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.310116 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" event={"ID":"4c5f4de9-78af-454e-839d-3d21667acac2","Type":"ContainerDied","Data":"94b430625e44d1384c07a760e53699bd2deada4ffa2c10f3ed9e57052298abbc"} Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.310156 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b430625e44d1384c07a760e53699bd2deada4ffa2c10f3ed9e57052298abbc" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.310490 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.608579 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:03 crc kubenswrapper[5023]: E0219 08:23:03.609305 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5f4de9-78af-454e-839d-3d21667acac2" containerName="watcher-kuttl-db-sync" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.609393 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5f4de9-78af-454e-839d-3d21667acac2" containerName="watcher-kuttl-db-sync" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.609688 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5f4de9-78af-454e-839d-3d21667acac2" containerName="watcher-kuttl-db-sync" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.610891 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.614133 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-tp724" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.614708 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.624082 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.733862 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.735000 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.738147 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.757076 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.757119 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4f7l\" (UniqueName: \"kubernetes.io/projected/900ad5b2-02a2-48d2-9530-80afde725172-kube-api-access-p4f7l\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.757318 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.757524 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.757608 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900ad5b2-02a2-48d2-9530-80afde725172-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.757709 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.779608 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.781077 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.784462 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.802167 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.862957 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.863029 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.863054 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4f7l\" (UniqueName: \"kubernetes.io/projected/900ad5b2-02a2-48d2-9530-80afde725172-kube-api-access-p4f7l\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.863095 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.868143 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.868330 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rs6x\" (UniqueName: \"kubernetes.io/projected/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-kube-api-access-7rs6x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.868373 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.868771 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900ad5b2-02a2-48d2-9530-80afde725172-logs\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.868883 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8mhq\" (UniqueName: \"kubernetes.io/projected/5adb8213-1e84-471c-952e-11abe1f09ff8-kube-api-access-x8mhq\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.868934 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.868969 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.869020 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5adb8213-1e84-471c-952e-11abe1f09ff8-logs\") pod \"watcher-kuttl-applier-0\" (UID: 
\"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.869039 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.869096 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.869130 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.869280 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900ad5b2-02a2-48d2-9530-80afde725172-logs\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.872590 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.875438 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.900779 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4f7l\" (UniqueName: \"kubernetes.io/projected/900ad5b2-02a2-48d2-9530-80afde725172-kube-api-access-p4f7l\") pod \"watcher-kuttl-api-0\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.929596 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.971230 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8mhq\" (UniqueName: \"kubernetes.io/projected/5adb8213-1e84-471c-952e-11abe1f09ff8-kube-api-access-x8mhq\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972134 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972175 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5adb8213-1e84-471c-952e-11abe1f09ff8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972211 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972235 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972268 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972307 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972352 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rs6x\" (UniqueName: 
\"kubernetes.io/projected/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-kube-api-access-7rs6x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.972377 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.973865 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.975309 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5adb8213-1e84-471c-952e-11abe1f09ff8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.976664 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.979845 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.980402 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:03 crc kubenswrapper[5023]: I0219 08:23:03.982764 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:03.997263 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8mhq\" (UniqueName: \"kubernetes.io/projected/5adb8213-1e84-471c-952e-11abe1f09ff8-kube-api-access-x8mhq\") pod \"watcher-kuttl-applier-0\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:03.998837 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rs6x\" (UniqueName: \"kubernetes.io/projected/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-kube-api-access-7rs6x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:04.005863 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:04.072788 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:04.103912 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:04.448263 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:04.592922 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:04 crc kubenswrapper[5023]: I0219 08:23:04.690913 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:04 crc kubenswrapper[5023]: W0219 08:23:04.691280 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd00d9f4f_c619_4df9_b9da_3ffe552c6bf0.slice/crio-c9066be5062b2eda3180e88ff29ac9c1ccc86c8bd6dae4254f59a9bed9ae203c WatchSource:0}: Error finding container c9066be5062b2eda3180e88ff29ac9c1ccc86c8bd6dae4254f59a9bed9ae203c: Status 404 returned error can't find the container with id c9066be5062b2eda3180e88ff29ac9c1ccc86c8bd6dae4254f59a9bed9ae203c Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.327104 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"900ad5b2-02a2-48d2-9530-80afde725172","Type":"ContainerStarted","Data":"5e0fffb7c9bc56ea1c40371fc446d0ec7fe06d70da65223093384d3b9159ae44"} 
Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.327719 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.327735 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"900ad5b2-02a2-48d2-9530-80afde725172","Type":"ContainerStarted","Data":"2601b31bc3770a4e0a92d1b396e74023d42e3cd7de3535441cfabd56a5398ad9"} Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.327746 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"900ad5b2-02a2-48d2-9530-80afde725172","Type":"ContainerStarted","Data":"2f42683be3e3364aff61f757609dfa52a045ec16086e90c0faef7c3babfa646e"} Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.329013 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0","Type":"ContainerStarted","Data":"41cbf4f9f684597d24658deb5cb8747db79a167a786104bedcff7a93136138c8"} Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.329067 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0","Type":"ContainerStarted","Data":"c9066be5062b2eda3180e88ff29ac9c1ccc86c8bd6dae4254f59a9bed9ae203c"} Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.331331 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5adb8213-1e84-471c-952e-11abe1f09ff8","Type":"ContainerStarted","Data":"f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85"} Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.331374 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"5adb8213-1e84-471c-952e-11abe1f09ff8","Type":"ContainerStarted","Data":"8f24a39db071a046d4509ae7bf44c1f9b51234a796c6b59b6ab8c9ab4788025c"} Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.354364 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.354340536 podStartE2EDuration="2.354340536s" podCreationTimestamp="2026-02-19 08:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:05.347153756 +0000 UTC m=+1343.004272704" watchObservedRunningTime="2026-02-19 08:23:05.354340536 +0000 UTC m=+1343.011459484" Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.367588 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.367566486 podStartE2EDuration="2.367566486s" podCreationTimestamp="2026-02-19 08:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:05.362489652 +0000 UTC m=+1343.019608600" watchObservedRunningTime="2026-02-19 08:23:05.367566486 +0000 UTC m=+1343.024685464" Feb 19 08:23:05 crc kubenswrapper[5023]: I0219 08:23:05.388441 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.388422549 podStartE2EDuration="2.388422549s" podCreationTimestamp="2026-02-19 08:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:05.382244585 +0000 UTC m=+1343.039363533" watchObservedRunningTime="2026-02-19 08:23:05.388422549 +0000 UTC m=+1343.045541497" Feb 19 08:23:07 crc kubenswrapper[5023]: I0219 08:23:07.755082 5023 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:08 crc kubenswrapper[5023]: I0219 08:23:08.931327 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:09 crc kubenswrapper[5023]: I0219 08:23:09.073757 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:13 crc kubenswrapper[5023]: I0219 08:23:13.930926 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:13 crc kubenswrapper[5023]: I0219 08:23:13.937366 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:14 crc kubenswrapper[5023]: I0219 08:23:14.073889 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:14 crc kubenswrapper[5023]: I0219 08:23:14.099059 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:14 crc kubenswrapper[5023]: I0219 08:23:14.104857 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:14 crc kubenswrapper[5023]: I0219 08:23:14.133644 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:14 crc kubenswrapper[5023]: I0219 08:23:14.417672 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:14 crc kubenswrapper[5023]: I0219 08:23:14.423048 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:14 crc 
kubenswrapper[5023]: I0219 08:23:14.458201 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:14 crc kubenswrapper[5023]: I0219 08:23:14.466343 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:16 crc kubenswrapper[5023]: I0219 08:23:16.677000 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:16 crc kubenswrapper[5023]: I0219 08:23:16.677533 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-central-agent" containerID="cri-o://350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5" gracePeriod=30 Feb 19 08:23:16 crc kubenswrapper[5023]: I0219 08:23:16.677579 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="sg-core" containerID="cri-o://a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927" gracePeriod=30 Feb 19 08:23:16 crc kubenswrapper[5023]: I0219 08:23:16.677635 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-notification-agent" containerID="cri-o://18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3" gracePeriod=30 Feb 19 08:23:16 crc kubenswrapper[5023]: I0219 08:23:16.677637 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="proxy-httpd" containerID="cri-o://dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961" gracePeriod=30 Feb 19 08:23:16 crc 
kubenswrapper[5023]: I0219 08:23:16.692946 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.147:3000/\": EOF" Feb 19 08:23:17 crc kubenswrapper[5023]: I0219 08:23:17.441005 5023 generic.go:334] "Generic (PLEG): container finished" podID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerID="dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961" exitCode=0 Feb 19 08:23:17 crc kubenswrapper[5023]: I0219 08:23:17.441786 5023 generic.go:334] "Generic (PLEG): container finished" podID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerID="a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927" exitCode=2 Feb 19 08:23:17 crc kubenswrapper[5023]: I0219 08:23:17.441866 5023 generic.go:334] "Generic (PLEG): container finished" podID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerID="350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5" exitCode=0 Feb 19 08:23:17 crc kubenswrapper[5023]: I0219 08:23:17.441051 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerDied","Data":"dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961"} Feb 19 08:23:17 crc kubenswrapper[5023]: I0219 08:23:17.442019 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerDied","Data":"a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927"} Feb 19 08:23:17 crc kubenswrapper[5023]: I0219 08:23:17.442061 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerDied","Data":"350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5"} Feb 19 08:23:17 crc 
kubenswrapper[5023]: I0219 08:23:17.959420 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048476 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-sg-core-conf-yaml\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048531 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-config-data\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048556 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-run-httpd\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048603 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2xn\" (UniqueName: \"kubernetes.io/projected/c150d099-6881-4e8d-9942-12909cbdf3b7-kube-api-access-kc2xn\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048663 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-log-httpd\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048683 5023 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-scripts\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048776 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-combined-ca-bundle\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.048828 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-ceilometer-tls-certs\") pod \"c150d099-6881-4e8d-9942-12909cbdf3b7\" (UID: \"c150d099-6881-4e8d-9942-12909cbdf3b7\") " Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.058673 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.061095 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.062161 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c150d099-6881-4e8d-9942-12909cbdf3b7-kube-api-access-kc2xn" (OuterVolumeSpecName: "kube-api-access-kc2xn") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "kube-api-access-kc2xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.072847 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-scripts" (OuterVolumeSpecName: "scripts") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.130947 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.140903 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.153242 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2hgbg"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.153684 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.153728 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.153741 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc2xn\" (UniqueName: \"kubernetes.io/projected/c150d099-6881-4e8d-9942-12909cbdf3b7-kube-api-access-kc2xn\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.153755 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c150d099-6881-4e8d-9942-12909cbdf3b7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.153765 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.178766 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher5046-account-delete-zvtg5"] Feb 19 08:23:18 crc kubenswrapper[5023]: E0219 08:23:18.179081 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-central-agent" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 
08:23:18.179093 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-central-agent" Feb 19 08:23:18 crc kubenswrapper[5023]: E0219 08:23:18.179111 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="sg-core" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.179119 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="sg-core" Feb 19 08:23:18 crc kubenswrapper[5023]: E0219 08:23:18.179135 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="proxy-httpd" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.179141 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="proxy-httpd" Feb 19 08:23:18 crc kubenswrapper[5023]: E0219 08:23:18.179152 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-notification-agent" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.179158 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-notification-agent" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.179302 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="sg-core" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.179315 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-notification-agent" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.179331 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="ceilometer-central-agent" Feb 19 08:23:18 crc 
kubenswrapper[5023]: I0219 08:23:18.179339 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerName="proxy-httpd" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.179909 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.183773 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.184531 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5046-account-delete-zvtg5"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.247949 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.251264 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.251823 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="5adb8213-1e84-471c-952e-11abe1f09ff8" containerName="watcher-applier" containerID="cri-o://f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85" gracePeriod=30 Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.254957 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.254982 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.263786 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-config-data" (OuterVolumeSpecName: "config-data") pod "c150d099-6881-4e8d-9942-12909cbdf3b7" (UID: "c150d099-6881-4e8d-9942-12909cbdf3b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.324592 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.324826 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-kuttl-api-log" containerID="cri-o://2601b31bc3770a4e0a92d1b396e74023d42e3cd7de3535441cfabd56a5398ad9" gracePeriod=30 Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.325203 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-api" containerID="cri-o://5e0fffb7c9bc56ea1c40371fc446d0ec7fe06d70da65223093384d3b9159ae44" gracePeriod=30 Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.347870 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.348080 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" containerName="watcher-decision-engine" containerID="cri-o://41cbf4f9f684597d24658deb5cb8747db79a167a786104bedcff7a93136138c8" gracePeriod=30 Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.357459 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlxz\" (UniqueName: \"kubernetes.io/projected/250476fa-e42c-44bc-8dab-e924f1693ef5-kube-api-access-sqlxz\") pod \"watcher5046-account-delete-zvtg5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc 
kubenswrapper[5023]: I0219 08:23:18.357570 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250476fa-e42c-44bc-8dab-e924f1693ef5-operator-scripts\") pod \"watcher5046-account-delete-zvtg5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.357680 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c150d099-6881-4e8d-9942-12909cbdf3b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.461820 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250476fa-e42c-44bc-8dab-e924f1693ef5-operator-scripts\") pod \"watcher5046-account-delete-zvtg5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.461906 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqlxz\" (UniqueName: \"kubernetes.io/projected/250476fa-e42c-44bc-8dab-e924f1693ef5-kube-api-access-sqlxz\") pod \"watcher5046-account-delete-zvtg5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.462603 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250476fa-e42c-44bc-8dab-e924f1693ef5-operator-scripts\") pod \"watcher5046-account-delete-zvtg5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.463203 5023 generic.go:334] 
"Generic (PLEG): container finished" podID="c150d099-6881-4e8d-9942-12909cbdf3b7" containerID="18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3" exitCode=0 Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.463240 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerDied","Data":"18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3"} Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.463267 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c150d099-6881-4e8d-9942-12909cbdf3b7","Type":"ContainerDied","Data":"21751efb0bef0f3f0a6a3f36e91f1f1d44b18b520a7639774a8f92ed5a020449"} Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.463282 5023 scope.go:117] "RemoveContainer" containerID="dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.463436 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.491337 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqlxz\" (UniqueName: \"kubernetes.io/projected/250476fa-e42c-44bc-8dab-e924f1693ef5-kube-api-access-sqlxz\") pod \"watcher5046-account-delete-zvtg5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.505760 5023 scope.go:117] "RemoveContainer" containerID="a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.518947 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.529669 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.532914 5023 scope.go:117] "RemoveContainer" containerID="18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.543220 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.555503 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.557598 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.562647 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.563485 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.563975 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.564765 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.607067 5023 scope.go:117] "RemoveContainer" containerID="350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.652582 5023 scope.go:117] "RemoveContainer" containerID="dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961" Feb 19 08:23:18 crc 
kubenswrapper[5023]: E0219 08:23:18.660128 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961\": container with ID starting with dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961 not found: ID does not exist" containerID="dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.660176 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961"} err="failed to get container status \"dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961\": rpc error: code = NotFound desc = could not find container \"dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961\": container with ID starting with dc500b2845db222efdd0459e472ef20d9565d9c3550d482d75d3fb5aa9870961 not found: ID does not exist" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.660205 5023 scope.go:117] "RemoveContainer" containerID="a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927" Feb 19 08:23:18 crc kubenswrapper[5023]: E0219 08:23:18.662036 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927\": container with ID starting with a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927 not found: ID does not exist" containerID="a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.662063 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927"} err="failed to get container status 
\"a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927\": rpc error: code = NotFound desc = could not find container \"a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927\": container with ID starting with a557c63d3e12d3c1449648443ad3b85268715ef3841170f14df3faeb5bd7e927 not found: ID does not exist" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.662078 5023 scope.go:117] "RemoveContainer" containerID="18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3" Feb 19 08:23:18 crc kubenswrapper[5023]: E0219 08:23:18.662300 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3\": container with ID starting with 18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3 not found: ID does not exist" containerID="18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.662319 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3"} err="failed to get container status \"18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3\": rpc error: code = NotFound desc = could not find container \"18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3\": container with ID starting with 18ef344f3a2075455b0990a0c3f3299855ebd2d672884dbafe04ffda42e7daa3 not found: ID does not exist" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.662333 5023 scope.go:117] "RemoveContainer" containerID="350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5" Feb 19 08:23:18 crc kubenswrapper[5023]: E0219 08:23:18.662549 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5\": container with ID starting with 350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5 not found: ID does not exist" containerID="350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.662570 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5"} err="failed to get container status \"350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5\": rpc error: code = NotFound desc = could not find container \"350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5\": container with ID starting with 350509a80d3647030a7d584c570b35e5d9f63a1fcae123c4aa224baa4fc61de5 not found: ID does not exist" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666513 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-log-httpd\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666573 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-scripts\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666596 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-config-data\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" 
Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666792 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54x4z\" (UniqueName: \"kubernetes.io/projected/158a244e-78b2-4abe-824d-d5253dc55e9f-kube-api-access-54x4z\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666832 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-run-httpd\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666856 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666875 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.666919 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc 
kubenswrapper[5023]: I0219 08:23:18.776528 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-log-httpd\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.776972 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-scripts\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.776991 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-config-data\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.777062 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54x4z\" (UniqueName: \"kubernetes.io/projected/158a244e-78b2-4abe-824d-d5253dc55e9f-kube-api-access-54x4z\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.777088 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-run-httpd\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.777109 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.777118 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-log-httpd\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.777129 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.777276 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.781432 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-run-httpd\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.792163 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-config-data\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" 
Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.795413 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-scripts\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.797476 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.798003 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.808363 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.832562 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54x4z\" (UniqueName: \"kubernetes.io/projected/158a244e-78b2-4abe-824d-d5253dc55e9f-kube-api-access-54x4z\") pod \"ceilometer-0\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:18 crc kubenswrapper[5023]: I0219 08:23:18.888164 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.033053 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5046-account-delete-zvtg5"] Feb 19 08:23:19 crc kubenswrapper[5023]: E0219 08:23:19.077537 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:23:19 crc kubenswrapper[5023]: E0219 08:23:19.082352 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:23:19 crc kubenswrapper[5023]: E0219 08:23:19.084522 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:23:19 crc kubenswrapper[5023]: E0219 08:23:19.084659 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="5adb8213-1e84-471c-952e-11abe1f09ff8" containerName="watcher-applier" Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.473374 5023 generic.go:334] "Generic (PLEG): container finished" podID="900ad5b2-02a2-48d2-9530-80afde725172" 
containerID="2601b31bc3770a4e0a92d1b396e74023d42e3cd7de3535441cfabd56a5398ad9" exitCode=143 Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.473670 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"900ad5b2-02a2-48d2-9530-80afde725172","Type":"ContainerDied","Data":"2601b31bc3770a4e0a92d1b396e74023d42e3cd7de3535441cfabd56a5398ad9"} Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.489883 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5f4de9-78af-454e-839d-3d21667acac2" path="/var/lib/kubelet/pods/4c5f4de9-78af-454e-839d-3d21667acac2/volumes" Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.490485 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c150d099-6881-4e8d-9942-12909cbdf3b7" path="/var/lib/kubelet/pods/c150d099-6881-4e8d-9942-12909cbdf3b7/volumes" Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.491132 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" event={"ID":"250476fa-e42c-44bc-8dab-e924f1693ef5","Type":"ContainerStarted","Data":"ddc00f0fa8076b068a184fb0e6ca440b1e8c4b770798c7f401c593486fdcde42"} Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.491162 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" event={"ID":"250476fa-e42c-44bc-8dab-e924f1693ef5","Type":"ContainerStarted","Data":"aabe7facb991224cb7c43d618fc26097dde48eca3c8501fefbe0e6d85bc77543"} Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.493344 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" podStartSLOduration=1.49333414 podStartE2EDuration="1.49333414s" podCreationTimestamp="2026-02-19 08:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 08:23:19.491199024 +0000 UTC m=+1357.148317992" watchObservedRunningTime="2026-02-19 08:23:19.49333414 +0000 UTC m=+1357.150453088" Feb 19 08:23:19 crc kubenswrapper[5023]: I0219 08:23:19.573123 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:19 crc kubenswrapper[5023]: W0219 08:23:19.577328 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158a244e_78b2_4abe_824d_d5253dc55e9f.slice/crio-02839b50fb3ab08e2824cb621a16124a40ae6bfb8f34d675573cf32d5b4e2675 WatchSource:0}: Error finding container 02839b50fb3ab08e2824cb621a16124a40ae6bfb8f34d675573cf32d5b4e2675: Status 404 returned error can't find the container with id 02839b50fb3ab08e2824cb621a16124a40ae6bfb8f34d675573cf32d5b4e2675 Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.295828 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": read tcp 10.217.0.2:48306->10.217.0.149:9322: read: connection reset by peer" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.295933 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": read tcp 10.217.0.2:48308->10.217.0.149:9322: read: connection reset by peer" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.488921 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerStarted","Data":"a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd"} Feb 19 08:23:20 crc kubenswrapper[5023]: 
I0219 08:23:20.488999 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerStarted","Data":"02839b50fb3ab08e2824cb621a16124a40ae6bfb8f34d675573cf32d5b4e2675"} Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.492381 5023 generic.go:334] "Generic (PLEG): container finished" podID="250476fa-e42c-44bc-8dab-e924f1693ef5" containerID="ddc00f0fa8076b068a184fb0e6ca440b1e8c4b770798c7f401c593486fdcde42" exitCode=0 Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.492567 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" event={"ID":"250476fa-e42c-44bc-8dab-e924f1693ef5","Type":"ContainerDied","Data":"ddc00f0fa8076b068a184fb0e6ca440b1e8c4b770798c7f401c593486fdcde42"} Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.508793 5023 generic.go:334] "Generic (PLEG): container finished" podID="900ad5b2-02a2-48d2-9530-80afde725172" containerID="5e0fffb7c9bc56ea1c40371fc446d0ec7fe06d70da65223093384d3b9159ae44" exitCode=0 Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.508844 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"900ad5b2-02a2-48d2-9530-80afde725172","Type":"ContainerDied","Data":"5e0fffb7c9bc56ea1c40371fc446d0ec7fe06d70da65223093384d3b9159ae44"} Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.636303 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.712766 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-custom-prometheus-ca\") pod \"900ad5b2-02a2-48d2-9530-80afde725172\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.713365 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900ad5b2-02a2-48d2-9530-80afde725172-logs\") pod \"900ad5b2-02a2-48d2-9530-80afde725172\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.713521 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-combined-ca-bundle\") pod \"900ad5b2-02a2-48d2-9530-80afde725172\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.713773 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-config-data\") pod \"900ad5b2-02a2-48d2-9530-80afde725172\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.713854 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4f7l\" (UniqueName: \"kubernetes.io/projected/900ad5b2-02a2-48d2-9530-80afde725172-kube-api-access-p4f7l\") pod \"900ad5b2-02a2-48d2-9530-80afde725172\" (UID: \"900ad5b2-02a2-48d2-9530-80afde725172\") " Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.715061 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/900ad5b2-02a2-48d2-9530-80afde725172-logs" (OuterVolumeSpecName: "logs") pod "900ad5b2-02a2-48d2-9530-80afde725172" (UID: "900ad5b2-02a2-48d2-9530-80afde725172"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.720292 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900ad5b2-02a2-48d2-9530-80afde725172-kube-api-access-p4f7l" (OuterVolumeSpecName: "kube-api-access-p4f7l") pod "900ad5b2-02a2-48d2-9530-80afde725172" (UID: "900ad5b2-02a2-48d2-9530-80afde725172"). InnerVolumeSpecName "kube-api-access-p4f7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.739920 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "900ad5b2-02a2-48d2-9530-80afde725172" (UID: "900ad5b2-02a2-48d2-9530-80afde725172"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.764957 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900ad5b2-02a2-48d2-9530-80afde725172" (UID: "900ad5b2-02a2-48d2-9530-80afde725172"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.776107 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-config-data" (OuterVolumeSpecName: "config-data") pod "900ad5b2-02a2-48d2-9530-80afde725172" (UID: "900ad5b2-02a2-48d2-9530-80afde725172"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.817499 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.817540 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900ad5b2-02a2-48d2-9530-80afde725172-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.817551 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.817559 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900ad5b2-02a2-48d2-9530-80afde725172-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.817567 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4f7l\" (UniqueName: \"kubernetes.io/projected/900ad5b2-02a2-48d2-9530-80afde725172-kube-api-access-p4f7l\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:20 crc kubenswrapper[5023]: I0219 08:23:20.869181 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.545105 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerStarted","Data":"a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f"} Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.560198 5023 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"900ad5b2-02a2-48d2-9530-80afde725172","Type":"ContainerDied","Data":"2f42683be3e3364aff61f757609dfa52a045ec16086e90c0faef7c3babfa646e"} Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.560252 5023 scope.go:117] "RemoveContainer" containerID="5e0fffb7c9bc56ea1c40371fc446d0ec7fe06d70da65223093384d3b9159ae44" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.560386 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.571990 5023 generic.go:334] "Generic (PLEG): container finished" podID="5adb8213-1e84-471c-952e-11abe1f09ff8" containerID="f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85" exitCode=0 Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.572255 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5adb8213-1e84-471c-952e-11abe1f09ff8","Type":"ContainerDied","Data":"f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85"} Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.674778 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.674856 5023 scope.go:117] "RemoveContainer" containerID="2601b31bc3770a4e0a92d1b396e74023d42e3cd7de3535441cfabd56a5398ad9" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.736470 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8mhq\" (UniqueName: \"kubernetes.io/projected/5adb8213-1e84-471c-952e-11abe1f09ff8-kube-api-access-x8mhq\") pod \"5adb8213-1e84-471c-952e-11abe1f09ff8\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.736648 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-combined-ca-bundle\") pod \"5adb8213-1e84-471c-952e-11abe1f09ff8\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.736698 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5adb8213-1e84-471c-952e-11abe1f09ff8-logs\") pod \"5adb8213-1e84-471c-952e-11abe1f09ff8\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.736758 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-config-data\") pod \"5adb8213-1e84-471c-952e-11abe1f09ff8\" (UID: \"5adb8213-1e84-471c-952e-11abe1f09ff8\") " Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.746812 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5adb8213-1e84-471c-952e-11abe1f09ff8-logs" (OuterVolumeSpecName: "logs") pod "5adb8213-1e84-471c-952e-11abe1f09ff8" (UID: "5adb8213-1e84-471c-952e-11abe1f09ff8"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.754979 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5adb8213-1e84-471c-952e-11abe1f09ff8-kube-api-access-x8mhq" (OuterVolumeSpecName: "kube-api-access-x8mhq") pod "5adb8213-1e84-471c-952e-11abe1f09ff8" (UID: "5adb8213-1e84-471c-952e-11abe1f09ff8"). InnerVolumeSpecName "kube-api-access-x8mhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.755104 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.774138 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.775956 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5adb8213-1e84-471c-952e-11abe1f09ff8" (UID: "5adb8213-1e84-471c-952e-11abe1f09ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.814764 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-config-data" (OuterVolumeSpecName: "config-data") pod "5adb8213-1e84-471c-952e-11abe1f09ff8" (UID: "5adb8213-1e84-471c-952e-11abe1f09ff8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.839711 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.840331 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5adb8213-1e84-471c-952e-11abe1f09ff8-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.840346 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adb8213-1e84-471c-952e-11abe1f09ff8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:21 crc kubenswrapper[5023]: I0219 08:23:21.840355 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8mhq\" (UniqueName: \"kubernetes.io/projected/5adb8213-1e84-471c-952e-11abe1f09ff8-kube-api-access-x8mhq\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.003385 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.146107 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqlxz\" (UniqueName: \"kubernetes.io/projected/250476fa-e42c-44bc-8dab-e924f1693ef5-kube-api-access-sqlxz\") pod \"250476fa-e42c-44bc-8dab-e924f1693ef5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.146196 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250476fa-e42c-44bc-8dab-e924f1693ef5-operator-scripts\") pod \"250476fa-e42c-44bc-8dab-e924f1693ef5\" (UID: \"250476fa-e42c-44bc-8dab-e924f1693ef5\") " Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.151326 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250476fa-e42c-44bc-8dab-e924f1693ef5-kube-api-access-sqlxz" (OuterVolumeSpecName: "kube-api-access-sqlxz") pod "250476fa-e42c-44bc-8dab-e924f1693ef5" (UID: "250476fa-e42c-44bc-8dab-e924f1693ef5"). InnerVolumeSpecName "kube-api-access-sqlxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.151935 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250476fa-e42c-44bc-8dab-e924f1693ef5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "250476fa-e42c-44bc-8dab-e924f1693ef5" (UID: "250476fa-e42c-44bc-8dab-e924f1693ef5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.248447 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqlxz\" (UniqueName: \"kubernetes.io/projected/250476fa-e42c-44bc-8dab-e924f1693ef5-kube-api-access-sqlxz\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.248485 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250476fa-e42c-44bc-8dab-e924f1693ef5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.589357 5023 generic.go:334] "Generic (PLEG): container finished" podID="d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" containerID="41cbf4f9f684597d24658deb5cb8747db79a167a786104bedcff7a93136138c8" exitCode=0 Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.589420 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0","Type":"ContainerDied","Data":"41cbf4f9f684597d24658deb5cb8747db79a167a786104bedcff7a93136138c8"} Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.590672 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5adb8213-1e84-471c-952e-11abe1f09ff8","Type":"ContainerDied","Data":"8f24a39db071a046d4509ae7bf44c1f9b51234a796c6b59b6ab8c9ab4788025c"} Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.590702 5023 scope.go:117] "RemoveContainer" containerID="f006022c3c57397baf45d56dbab6cfe760fe9899a8e873a2441bb96416b30e85" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.590788 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.633879 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerStarted","Data":"a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5"} Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.638536 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" event={"ID":"250476fa-e42c-44bc-8dab-e924f1693ef5","Type":"ContainerDied","Data":"aabe7facb991224cb7c43d618fc26097dde48eca3c8501fefbe0e6d85bc77543"} Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.638561 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aabe7facb991224cb7c43d618fc26097dde48eca3c8501fefbe0e6d85bc77543" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.638636 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5046-account-delete-zvtg5" Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.721658 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.764133 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:22 crc kubenswrapper[5023]: I0219 08:23:22.997553 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.066400 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rs6x\" (UniqueName: \"kubernetes.io/projected/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-kube-api-access-7rs6x\") pod \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.066557 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-config-data\") pod \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.066704 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-logs\") pod \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.066746 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-combined-ca-bundle\") pod \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.066791 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-custom-prometheus-ca\") pod \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\" (UID: \"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0\") " Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.066919 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-logs" (OuterVolumeSpecName: "logs") pod "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" (UID: "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.067161 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.083391 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-kube-api-access-7rs6x" (OuterVolumeSpecName: "kube-api-access-7rs6x") pod "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" (UID: "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0"). InnerVolumeSpecName "kube-api-access-7rs6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.088421 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" (UID: "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.089146 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" (UID: "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.110305 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-config-data" (OuterVolumeSpecName: "config-data") pod "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" (UID: "d00d9f4f-c619-4df9-b9da-3ffe552c6bf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.167107 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lj4xm"] Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.168213 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.168235 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.168247 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rs6x\" (UniqueName: \"kubernetes.io/projected/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-kube-api-access-7rs6x\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.168259 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.176983 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lj4xm"] Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.190638 5023 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher5046-account-delete-zvtg5"] Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.199572 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-5046-account-create-update-njk7n"] Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.206301 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher5046-account-delete-zvtg5"] Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.212981 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-5046-account-create-update-njk7n"] Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.487204 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250476fa-e42c-44bc-8dab-e924f1693ef5" path="/var/lib/kubelet/pods/250476fa-e42c-44bc-8dab-e924f1693ef5/volumes" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.488525 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5adb8213-1e84-471c-952e-11abe1f09ff8" path="/var/lib/kubelet/pods/5adb8213-1e84-471c-952e-11abe1f09ff8/volumes" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.489026 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a5b9f83-8d00-411b-83dc-bcf3872c3451" path="/var/lib/kubelet/pods/6a5b9f83-8d00-411b-83dc-bcf3872c3451/volumes" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.490022 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8a723e-a22d-4601-a71c-c9145b58da3a" path="/var/lib/kubelet/pods/8b8a723e-a22d-4601-a71c-c9145b58da3a/volumes" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.490520 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900ad5b2-02a2-48d2-9530-80afde725172" path="/var/lib/kubelet/pods/900ad5b2-02a2-48d2-9530-80afde725172/volumes" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.650210 5023 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerStarted","Data":"37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567"} Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.650472 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-central-agent" containerID="cri-o://a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd" gracePeriod=30 Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.650582 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-notification-agent" containerID="cri-o://a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f" gracePeriod=30 Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.650629 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="proxy-httpd" containerID="cri-o://37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567" gracePeriod=30 Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.650828 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="sg-core" containerID="cri-o://a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5" gracePeriod=30 Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.652270 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"d00d9f4f-c619-4df9-b9da-3ffe552c6bf0","Type":"ContainerDied","Data":"c9066be5062b2eda3180e88ff29ac9c1ccc86c8bd6dae4254f59a9bed9ae203c"} Feb 19 
08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.652319 5023 scope.go:117] "RemoveContainer" containerID="41cbf4f9f684597d24658deb5cb8747db79a167a786104bedcff7a93136138c8" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.652340 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.676008 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.052740041 podStartE2EDuration="5.675988531s" podCreationTimestamp="2026-02-19 08:23:18 +0000 UTC" firstStartedPulling="2026-02-19 08:23:19.580015 +0000 UTC m=+1357.237133948" lastFinishedPulling="2026-02-19 08:23:23.20326349 +0000 UTC m=+1360.860382438" observedRunningTime="2026-02-19 08:23:23.67217153 +0000 UTC m=+1361.329290478" watchObservedRunningTime="2026-02-19 08:23:23.675988531 +0000 UTC m=+1361.333107469" Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.695885 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:23 crc kubenswrapper[5023]: I0219 08:23:23.704095 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404393 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-h4c45"] Feb 19 08:23:24 crc kubenswrapper[5023]: E0219 08:23:24.404734 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-api" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404749 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-api" Feb 19 08:23:24 crc kubenswrapper[5023]: E0219 08:23:24.404762 5023 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5adb8213-1e84-471c-952e-11abe1f09ff8" containerName="watcher-applier" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404768 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adb8213-1e84-471c-952e-11abe1f09ff8" containerName="watcher-applier" Feb 19 08:23:24 crc kubenswrapper[5023]: E0219 08:23:24.404792 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250476fa-e42c-44bc-8dab-e924f1693ef5" containerName="mariadb-account-delete" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404798 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="250476fa-e42c-44bc-8dab-e924f1693ef5" containerName="mariadb-account-delete" Feb 19 08:23:24 crc kubenswrapper[5023]: E0219 08:23:24.404807 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" containerName="watcher-decision-engine" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404813 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" containerName="watcher-decision-engine" Feb 19 08:23:24 crc kubenswrapper[5023]: E0219 08:23:24.404823 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-kuttl-api-log" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404829 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-kuttl-api-log" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404967 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adb8213-1e84-471c-952e-11abe1f09ff8" containerName="watcher-applier" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404976 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-api" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 
08:23:24.404988 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" containerName="watcher-decision-engine" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.404995 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="900ad5b2-02a2-48d2-9530-80afde725172" containerName="watcher-kuttl-api-log" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.405003 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="250476fa-e42c-44bc-8dab-e924f1693ef5" containerName="mariadb-account-delete" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.405528 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.417229 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc"] Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.418373 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.420920 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.453302 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc"] Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.459218 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-h4c45"] Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.489076 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74nt\" (UniqueName: \"kubernetes.io/projected/41ebba87-5d6e-4158-a4d7-e5232469601a-kube-api-access-t74nt\") pod \"watcher-db-create-h4c45\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.489130 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ebba87-5d6e-4158-a4d7-e5232469601a-operator-scripts\") pod \"watcher-db-create-h4c45\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.489188 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340c7939-dc28-4472-865f-09566ccb8e37-operator-scripts\") pod \"watcher-2ec5-account-create-update-qxtbc\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.489224 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzmm\" (UniqueName: \"kubernetes.io/projected/340c7939-dc28-4472-865f-09566ccb8e37-kube-api-access-rjzmm\") pod \"watcher-2ec5-account-create-update-qxtbc\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.591132 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74nt\" (UniqueName: \"kubernetes.io/projected/41ebba87-5d6e-4158-a4d7-e5232469601a-kube-api-access-t74nt\") pod \"watcher-db-create-h4c45\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.591192 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ebba87-5d6e-4158-a4d7-e5232469601a-operator-scripts\") pod \"watcher-db-create-h4c45\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.591251 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340c7939-dc28-4472-865f-09566ccb8e37-operator-scripts\") pod \"watcher-2ec5-account-create-update-qxtbc\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.591285 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzmm\" (UniqueName: \"kubernetes.io/projected/340c7939-dc28-4472-865f-09566ccb8e37-kube-api-access-rjzmm\") pod \"watcher-2ec5-account-create-update-qxtbc\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " 
pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.593399 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ebba87-5d6e-4158-a4d7-e5232469601a-operator-scripts\") pod \"watcher-db-create-h4c45\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.594073 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340c7939-dc28-4472-865f-09566ccb8e37-operator-scripts\") pod \"watcher-2ec5-account-create-update-qxtbc\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.612445 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzmm\" (UniqueName: \"kubernetes.io/projected/340c7939-dc28-4472-865f-09566ccb8e37-kube-api-access-rjzmm\") pod \"watcher-2ec5-account-create-update-qxtbc\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.616562 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74nt\" (UniqueName: \"kubernetes.io/projected/41ebba87-5d6e-4158-a4d7-e5232469601a-kube-api-access-t74nt\") pod \"watcher-db-create-h4c45\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.664089 5023 generic.go:334] "Generic (PLEG): container finished" podID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerID="37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567" exitCode=0 Feb 19 08:23:24 crc 
kubenswrapper[5023]: I0219 08:23:24.664893 5023 generic.go:334] "Generic (PLEG): container finished" podID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerID="a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5" exitCode=2 Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.664985 5023 generic.go:334] "Generic (PLEG): container finished" podID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerID="a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f" exitCode=0 Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.664372 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerDied","Data":"37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567"} Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.665161 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerDied","Data":"a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5"} Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.665242 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerDied","Data":"a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f"} Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.719109 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:24 crc kubenswrapper[5023]: I0219 08:23:24.730199 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.264173 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-h4c45"] Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.359834 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc"] Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.477324 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.485839 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00d9f4f-c619-4df9-b9da-3ffe552c6bf0" path="/var/lib/kubelet/pods/d00d9f4f-c619-4df9-b9da-3ffe552c6bf0/volumes" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.622595 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-sg-core-conf-yaml\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.622701 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54x4z\" (UniqueName: \"kubernetes.io/projected/158a244e-78b2-4abe-824d-d5253dc55e9f-kube-api-access-54x4z\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.622743 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-ceilometer-tls-certs\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc 
kubenswrapper[5023]: I0219 08:23:25.622817 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-log-httpd\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.622951 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-scripts\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.623010 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-combined-ca-bundle\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.623140 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-run-httpd\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.623195 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-config-data\") pod \"158a244e-78b2-4abe-824d-d5253dc55e9f\" (UID: \"158a244e-78b2-4abe-824d-d5253dc55e9f\") " Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.630380 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-scripts" (OuterVolumeSpecName: "scripts") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: 
"158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.630829 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: "158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.630870 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: "158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.632778 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158a244e-78b2-4abe-824d-d5253dc55e9f-kube-api-access-54x4z" (OuterVolumeSpecName: "kube-api-access-54x4z") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: "158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "kube-api-access-54x4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.668929 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: "158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.682427 5023 generic.go:334] "Generic (PLEG): container finished" podID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerID="a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd" exitCode=0 Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.682506 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerDied","Data":"a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd"} Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.682544 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"158a244e-78b2-4abe-824d-d5253dc55e9f","Type":"ContainerDied","Data":"02839b50fb3ab08e2824cb621a16124a40ae6bfb8f34d675573cf32d5b4e2675"} Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.682566 5023 scope.go:117] "RemoveContainer" containerID="37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.682742 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.692488 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" event={"ID":"340c7939-dc28-4472-865f-09566ccb8e37","Type":"ContainerStarted","Data":"b6e65b0db983b478841c12b14b6e0e191c4fea7fc8070568c3456c9690ceb8b4"} Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.692535 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" event={"ID":"340c7939-dc28-4472-865f-09566ccb8e37","Type":"ContainerStarted","Data":"9a75b65290d96bc22afe3ff6c84cc4f1ce9dd2fc46348c39541b4ce43564a7a9"} Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.728807 5023 scope.go:117] "RemoveContainer" containerID="a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.730657 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.730677 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54x4z\" (UniqueName: \"kubernetes.io/projected/158a244e-78b2-4abe-824d-d5253dc55e9f-kube-api-access-54x4z\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.735545 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.735569 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.735579 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/158a244e-78b2-4abe-824d-d5253dc55e9f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.735663 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-h4c45" event={"ID":"41ebba87-5d6e-4158-a4d7-e5232469601a","Type":"ContainerStarted","Data":"7814f4b666bdd2c0c72ae7f9c4a660b6b6a090f6be60f4868ab2400199525930"} Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.735813 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-h4c45" event={"ID":"41ebba87-5d6e-4158-a4d7-e5232469601a","Type":"ContainerStarted","Data":"2f091bf09cc4969deb48a71ccb663a4e0330a322c8aa7009e020165c3c2d20f5"} Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.738294 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" podStartSLOduration=1.738276442 podStartE2EDuration="1.738276442s" podCreationTimestamp="2026-02-19 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:25.729220462 +0000 UTC m=+1363.386339410" watchObservedRunningTime="2026-02-19 08:23:25.738276442 +0000 UTC m=+1363.395395390" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.773609 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-h4c45" podStartSLOduration=1.773587969 podStartE2EDuration="1.773587969s" podCreationTimestamp="2026-02-19 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:25.762060933 +0000 UTC m=+1363.419179891" 
watchObservedRunningTime="2026-02-19 08:23:25.773587969 +0000 UTC m=+1363.430706917" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.775156 5023 scope.go:117] "RemoveContainer" containerID="a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.808417 5023 scope.go:117] "RemoveContainer" containerID="a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.850893 5023 scope.go:117] "RemoveContainer" containerID="37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567" Feb 19 08:23:25 crc kubenswrapper[5023]: E0219 08:23:25.855120 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567\": container with ID starting with 37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567 not found: ID does not exist" containerID="37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.855864 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567"} err="failed to get container status \"37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567\": rpc error: code = NotFound desc = could not find container \"37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567\": container with ID starting with 37807e5d6a7e1673c0c986d66b014aae10cc0b6f9b2b77f822f0230670185567 not found: ID does not exist" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.855906 5023 scope.go:117] "RemoveContainer" containerID="a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5" Feb 19 08:23:25 crc kubenswrapper[5023]: E0219 08:23:25.859020 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5\": container with ID starting with a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5 not found: ID does not exist" containerID="a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.859087 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5"} err="failed to get container status \"a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5\": rpc error: code = NotFound desc = could not find container \"a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5\": container with ID starting with a86c30d7a85131c1f6f6b3ade7ed1cbe7049781bd6e909a4d8f561559e7febe5 not found: ID does not exist" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.859157 5023 scope.go:117] "RemoveContainer" containerID="a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f" Feb 19 08:23:25 crc kubenswrapper[5023]: E0219 08:23:25.862988 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f\": container with ID starting with a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f not found: ID does not exist" containerID="a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.863067 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f"} err="failed to get container status \"a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f\": rpc error: code = NotFound desc = could not find container 
\"a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f\": container with ID starting with a98cfdd1f3e3c26606dc8fe013a107cd2de259beb33848cd7e786cab1565641f not found: ID does not exist" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.863089 5023 scope.go:117] "RemoveContainer" containerID="a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd" Feb 19 08:23:25 crc kubenswrapper[5023]: E0219 08:23:25.866910 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd\": container with ID starting with a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd not found: ID does not exist" containerID="a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.866932 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd"} err="failed to get container status \"a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd\": rpc error: code = NotFound desc = could not find container \"a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd\": container with ID starting with a2147ce0c131f1fb712f64877f03ab37f32d329e2832439f0704ea7cf87dd5fd not found: ID does not exist" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.880801 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: "158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.939414 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.939857 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: "158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:25 crc kubenswrapper[5023]: I0219 08:23:25.975792 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-config-data" (OuterVolumeSpecName: "config-data") pod "158a244e-78b2-4abe-824d-d5253dc55e9f" (UID: "158a244e-78b2-4abe-824d-d5253dc55e9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.035885 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.040518 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.040543 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158a244e-78b2-4abe-824d-d5253dc55e9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.043907 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.062838 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:26 crc kubenswrapper[5023]: E0219 08:23:26.063178 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="proxy-httpd" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063196 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="proxy-httpd" Feb 19 08:23:26 crc kubenswrapper[5023]: E0219 08:23:26.063212 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-central-agent" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063218 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-central-agent" Feb 19 08:23:26 crc kubenswrapper[5023]: E0219 08:23:26.063235 5023 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-notification-agent" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063242 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-notification-agent" Feb 19 08:23:26 crc kubenswrapper[5023]: E0219 08:23:26.063265 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="sg-core" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063275 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="sg-core" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063431 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-notification-agent" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063449 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="ceilometer-central-agent" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063456 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="proxy-httpd" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.063474 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" containerName="sg-core" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.064938 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.069307 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.069489 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.069599 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.079488 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.141588 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mx2\" (UniqueName: \"kubernetes.io/projected/48974a36-e692-4aae-911f-d1c55886e393-kube-api-access-k8mx2\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.141672 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.141727 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: 
I0219 08:23:26.141830 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.141856 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-run-httpd\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.141885 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-log-httpd\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.141919 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-config-data\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.141949 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-scripts\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.246162 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k8mx2\" (UniqueName: \"kubernetes.io/projected/48974a36-e692-4aae-911f-d1c55886e393-kube-api-access-k8mx2\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.246290 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.246377 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.247670 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.247746 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-run-httpd\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.248408 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-run-httpd\") pod \"ceilometer-0\" (UID: 
\"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.248988 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-log-httpd\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.249056 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-log-httpd\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.249127 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-config-data\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.249663 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-scripts\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.251485 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.252233 5023 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.252531 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.254092 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-scripts\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.263842 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mx2\" (UniqueName: \"kubernetes.io/projected/48974a36-e692-4aae-911f-d1c55886e393-kube-api-access-k8mx2\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.264816 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-config-data\") pod \"ceilometer-0\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.322697 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.745607 5023 generic.go:334] "Generic (PLEG): container finished" podID="340c7939-dc28-4472-865f-09566ccb8e37" containerID="b6e65b0db983b478841c12b14b6e0e191c4fea7fc8070568c3456c9690ceb8b4" exitCode=0 Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.745706 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" event={"ID":"340c7939-dc28-4472-865f-09566ccb8e37","Type":"ContainerDied","Data":"b6e65b0db983b478841c12b14b6e0e191c4fea7fc8070568c3456c9690ceb8b4"} Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.748114 5023 generic.go:334] "Generic (PLEG): container finished" podID="41ebba87-5d6e-4158-a4d7-e5232469601a" containerID="7814f4b666bdd2c0c72ae7f9c4a660b6b6a090f6be60f4868ab2400199525930" exitCode=0 Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.748162 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-h4c45" event={"ID":"41ebba87-5d6e-4158-a4d7-e5232469601a","Type":"ContainerDied","Data":"7814f4b666bdd2c0c72ae7f9c4a660b6b6a090f6be60f4868ab2400199525930"} Feb 19 08:23:26 crc kubenswrapper[5023]: I0219 08:23:26.810590 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:26 crc kubenswrapper[5023]: W0219 08:23:26.817787 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48974a36_e692_4aae_911f_d1c55886e393.slice/crio-c679210a34e498c4ecf76e61852498070aa6ba44376f4d8236fcab55d2ded5e8 WatchSource:0}: Error finding container c679210a34e498c4ecf76e61852498070aa6ba44376f4d8236fcab55d2ded5e8: Status 404 returned error can't find the container with id c679210a34e498c4ecf76e61852498070aa6ba44376f4d8236fcab55d2ded5e8 Feb 19 08:23:27 crc kubenswrapper[5023]: I0219 08:23:27.503640 
5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158a244e-78b2-4abe-824d-d5253dc55e9f" path="/var/lib/kubelet/pods/158a244e-78b2-4abe-824d-d5253dc55e9f/volumes" Feb 19 08:23:27 crc kubenswrapper[5023]: I0219 08:23:27.757689 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerStarted","Data":"6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552"} Feb 19 08:23:27 crc kubenswrapper[5023]: I0219 08:23:27.757730 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerStarted","Data":"c679210a34e498c4ecf76e61852498070aa6ba44376f4d8236fcab55d2ded5e8"} Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.259373 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.273826 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.386432 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340c7939-dc28-4472-865f-09566ccb8e37-operator-scripts\") pod \"340c7939-dc28-4472-865f-09566ccb8e37\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.386507 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ebba87-5d6e-4158-a4d7-e5232469601a-operator-scripts\") pod \"41ebba87-5d6e-4158-a4d7-e5232469601a\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.386562 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t74nt\" (UniqueName: \"kubernetes.io/projected/41ebba87-5d6e-4158-a4d7-e5232469601a-kube-api-access-t74nt\") pod \"41ebba87-5d6e-4158-a4d7-e5232469601a\" (UID: \"41ebba87-5d6e-4158-a4d7-e5232469601a\") " Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.386589 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjzmm\" (UniqueName: \"kubernetes.io/projected/340c7939-dc28-4472-865f-09566ccb8e37-kube-api-access-rjzmm\") pod \"340c7939-dc28-4472-865f-09566ccb8e37\" (UID: \"340c7939-dc28-4472-865f-09566ccb8e37\") " Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.387127 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/340c7939-dc28-4472-865f-09566ccb8e37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "340c7939-dc28-4472-865f-09566ccb8e37" (UID: "340c7939-dc28-4472-865f-09566ccb8e37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.387207 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ebba87-5d6e-4158-a4d7-e5232469601a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41ebba87-5d6e-4158-a4d7-e5232469601a" (UID: "41ebba87-5d6e-4158-a4d7-e5232469601a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.387799 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340c7939-dc28-4472-865f-09566ccb8e37-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.387818 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41ebba87-5d6e-4158-a4d7-e5232469601a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.390280 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340c7939-dc28-4472-865f-09566ccb8e37-kube-api-access-rjzmm" (OuterVolumeSpecName: "kube-api-access-rjzmm") pod "340c7939-dc28-4472-865f-09566ccb8e37" (UID: "340c7939-dc28-4472-865f-09566ccb8e37"). InnerVolumeSpecName "kube-api-access-rjzmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.403984 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ebba87-5d6e-4158-a4d7-e5232469601a-kube-api-access-t74nt" (OuterVolumeSpecName: "kube-api-access-t74nt") pod "41ebba87-5d6e-4158-a4d7-e5232469601a" (UID: "41ebba87-5d6e-4158-a4d7-e5232469601a"). InnerVolumeSpecName "kube-api-access-t74nt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.489776 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t74nt\" (UniqueName: \"kubernetes.io/projected/41ebba87-5d6e-4158-a4d7-e5232469601a-kube-api-access-t74nt\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.489818 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjzmm\" (UniqueName: \"kubernetes.io/projected/340c7939-dc28-4472-865f-09566ccb8e37-kube-api-access-rjzmm\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.788876 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-h4c45" event={"ID":"41ebba87-5d6e-4158-a4d7-e5232469601a","Type":"ContainerDied","Data":"2f091bf09cc4969deb48a71ccb663a4e0330a322c8aa7009e020165c3c2d20f5"} Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.788926 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f091bf09cc4969deb48a71ccb663a4e0330a322c8aa7009e020165c3c2d20f5" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.788988 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-h4c45" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.800223 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerStarted","Data":"6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9"} Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.800297 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerStarted","Data":"df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354"} Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.803357 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.803692 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc" event={"ID":"340c7939-dc28-4472-865f-09566ccb8e37","Type":"ContainerDied","Data":"9a75b65290d96bc22afe3ff6c84cc4f1ce9dd2fc46348c39541b4ce43564a7a9"} Feb 19 08:23:28 crc kubenswrapper[5023]: I0219 08:23:28.803777 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a75b65290d96bc22afe3ff6c84cc4f1ce9dd2fc46348c39541b4ce43564a7a9" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.665544 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw"] Feb 19 08:23:29 crc kubenswrapper[5023]: E0219 08:23:29.665858 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ebba87-5d6e-4158-a4d7-e5232469601a" containerName="mariadb-database-create" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.665871 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="41ebba87-5d6e-4158-a4d7-e5232469601a" containerName="mariadb-database-create" Feb 19 08:23:29 crc kubenswrapper[5023]: E0219 08:23:29.665902 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340c7939-dc28-4472-865f-09566ccb8e37" containerName="mariadb-account-create-update" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.665909 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="340c7939-dc28-4472-865f-09566ccb8e37" containerName="mariadb-account-create-update" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.666057 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ebba87-5d6e-4158-a4d7-e5232469601a" containerName="mariadb-database-create" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.666082 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="340c7939-dc28-4472-865f-09566ccb8e37" containerName="mariadb-account-create-update" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.666633 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.670339 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.670584 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nmk5v" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.688562 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw"] Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.811509 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.811644 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.811711 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgxlp\" (UniqueName: \"kubernetes.io/projected/5e0f1e41-657a-40f7-8d0c-62fce6a96905-kube-api-access-sgxlp\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.811749 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-config-data\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.913161 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.913238 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.913269 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgxlp\" (UniqueName: \"kubernetes.io/projected/5e0f1e41-657a-40f7-8d0c-62fce6a96905-kube-api-access-sgxlp\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.913331 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-config-data\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: 
I0219 08:23:29.918972 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-config-data\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.919079 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.919448 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.944928 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgxlp\" (UniqueName: \"kubernetes.io/projected/5e0f1e41-657a-40f7-8d0c-62fce6a96905-kube-api-access-sgxlp\") pod \"watcher-kuttl-db-sync-bl9zw\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:29 crc kubenswrapper[5023]: I0219 08:23:29.991416 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:30 crc kubenswrapper[5023]: W0219 08:23:30.497210 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0f1e41_657a_40f7_8d0c_62fce6a96905.slice/crio-b7fe2febe5ff3a4846a4b2bc6b7c1692451d61885c26a848ca1aab73695b3510 WatchSource:0}: Error finding container b7fe2febe5ff3a4846a4b2bc6b7c1692451d61885c26a848ca1aab73695b3510: Status 404 returned error can't find the container with id b7fe2febe5ff3a4846a4b2bc6b7c1692451d61885c26a848ca1aab73695b3510 Feb 19 08:23:30 crc kubenswrapper[5023]: I0219 08:23:30.498484 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw"] Feb 19 08:23:30 crc kubenswrapper[5023]: I0219 08:23:30.823897 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerStarted","Data":"ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8"} Feb 19 08:23:30 crc kubenswrapper[5023]: I0219 08:23:30.824268 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:30 crc kubenswrapper[5023]: I0219 08:23:30.826122 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" event={"ID":"5e0f1e41-657a-40f7-8d0c-62fce6a96905","Type":"ContainerStarted","Data":"d0e0cc640f1883672496b2b7b98cb59f1bc10d053f6a836606886f7b12748483"} Feb 19 08:23:30 crc kubenswrapper[5023]: I0219 08:23:30.826153 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" event={"ID":"5e0f1e41-657a-40f7-8d0c-62fce6a96905","Type":"ContainerStarted","Data":"b7fe2febe5ff3a4846a4b2bc6b7c1692451d61885c26a848ca1aab73695b3510"} Feb 19 08:23:30 crc kubenswrapper[5023]: I0219 08:23:30.845062 
5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.5443819159999999 podStartE2EDuration="4.845041008s" podCreationTimestamp="2026-02-19 08:23:26 +0000 UTC" firstStartedPulling="2026-02-19 08:23:26.82043693 +0000 UTC m=+1364.477555878" lastFinishedPulling="2026-02-19 08:23:30.121096022 +0000 UTC m=+1367.778214970" observedRunningTime="2026-02-19 08:23:30.843787904 +0000 UTC m=+1368.500906862" watchObservedRunningTime="2026-02-19 08:23:30.845041008 +0000 UTC m=+1368.502159956" Feb 19 08:23:30 crc kubenswrapper[5023]: I0219 08:23:30.865383 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" podStartSLOduration=1.8653628370000002 podStartE2EDuration="1.865362837s" podCreationTimestamp="2026-02-19 08:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:30.857374325 +0000 UTC m=+1368.514493273" watchObservedRunningTime="2026-02-19 08:23:30.865362837 +0000 UTC m=+1368.522481785" Feb 19 08:23:33 crc kubenswrapper[5023]: I0219 08:23:33.848553 5023 generic.go:334] "Generic (PLEG): container finished" podID="5e0f1e41-657a-40f7-8d0c-62fce6a96905" containerID="d0e0cc640f1883672496b2b7b98cb59f1bc10d053f6a836606886f7b12748483" exitCode=0 Feb 19 08:23:33 crc kubenswrapper[5023]: I0219 08:23:33.848665 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" event={"ID":"5e0f1e41-657a-40f7-8d0c-62fce6a96905","Type":"ContainerDied","Data":"d0e0cc640f1883672496b2b7b98cb59f1bc10d053f6a836606886f7b12748483"} Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.236129 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.344267 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-db-sync-config-data\") pod \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.345263 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-config-data\") pod \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.345409 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-combined-ca-bundle\") pod \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.345738 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgxlp\" (UniqueName: \"kubernetes.io/projected/5e0f1e41-657a-40f7-8d0c-62fce6a96905-kube-api-access-sgxlp\") pod \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\" (UID: \"5e0f1e41-657a-40f7-8d0c-62fce6a96905\") " Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.348933 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5e0f1e41-657a-40f7-8d0c-62fce6a96905" (UID: "5e0f1e41-657a-40f7-8d0c-62fce6a96905"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.353029 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0f1e41-657a-40f7-8d0c-62fce6a96905-kube-api-access-sgxlp" (OuterVolumeSpecName: "kube-api-access-sgxlp") pod "5e0f1e41-657a-40f7-8d0c-62fce6a96905" (UID: "5e0f1e41-657a-40f7-8d0c-62fce6a96905"). InnerVolumeSpecName "kube-api-access-sgxlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.370916 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e0f1e41-657a-40f7-8d0c-62fce6a96905" (UID: "5e0f1e41-657a-40f7-8d0c-62fce6a96905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.393784 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-config-data" (OuterVolumeSpecName: "config-data") pod "5e0f1e41-657a-40f7-8d0c-62fce6a96905" (UID: "5e0f1e41-657a-40f7-8d0c-62fce6a96905"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.448250 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.448566 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.448580 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0f1e41-657a-40f7-8d0c-62fce6a96905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.448591 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgxlp\" (UniqueName: \"kubernetes.io/projected/5e0f1e41-657a-40f7-8d0c-62fce6a96905-kube-api-access-sgxlp\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.866004 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" event={"ID":"5e0f1e41-657a-40f7-8d0c-62fce6a96905","Type":"ContainerDied","Data":"b7fe2febe5ff3a4846a4b2bc6b7c1692451d61885c26a848ca1aab73695b3510"} Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.866041 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7fe2febe5ff3a4846a4b2bc6b7c1692451d61885c26a848ca1aab73695b3510" Feb 19 08:23:35 crc kubenswrapper[5023]: I0219 08:23:35.866046 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.124254 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: E0219 08:23:36.124684 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0f1e41-657a-40f7-8d0c-62fce6a96905" containerName="watcher-kuttl-db-sync" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.124706 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0f1e41-657a-40f7-8d0c-62fce6a96905" containerName="watcher-kuttl-db-sync" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.124929 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0f1e41-657a-40f7-8d0c-62fce6a96905" containerName="watcher-kuttl-db-sync" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.126005 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.127879 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nmk5v" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.134089 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.135199 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.138148 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.139032 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.140727 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.149795 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.217523 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.218575 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.225119 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.232848 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.265746 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.265792 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ch24\" (UniqueName: \"kubernetes.io/projected/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-kube-api-access-7ch24\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.265834 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.265853 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60402bd7-073d-44dd-9655-59083be6b132-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.265869 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5gw\" (UniqueName: \"kubernetes.io/projected/60402bd7-073d-44dd-9655-59083be6b132-kube-api-access-2h5gw\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.266081 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.266152 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.266205 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.266481 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-combined-ca-bundle\") pod 
\"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368143 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368230 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368354 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368403 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ch24\" (UniqueName: \"kubernetes.io/projected/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-kube-api-access-7ch24\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368454 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmkz\" (UniqueName: 
\"kubernetes.io/projected/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-kube-api-access-6tmkz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368472 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368491 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5gw\" (UniqueName: \"kubernetes.io/projected/60402bd7-073d-44dd-9655-59083be6b132-kube-api-access-2h5gw\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368507 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60402bd7-073d-44dd-9655-59083be6b132-logs\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368539 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368562 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.368598 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.369024 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60402bd7-073d-44dd-9655-59083be6b132-logs\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.369058 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.369197 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.369240 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.369321 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.372821 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.372995 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.374504 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.375452 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-config-data\") pod \"watcher-kuttl-api-0\" 
(UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.379382 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.384522 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5gw\" (UniqueName: \"kubernetes.io/projected/60402bd7-073d-44dd-9655-59083be6b132-kube-api-access-2h5gw\") pod \"watcher-kuttl-api-0\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.384766 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ch24\" (UniqueName: \"kubernetes.io/projected/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-kube-api-access-7ch24\") pod \"watcher-kuttl-applier-0\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.446327 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.470614 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmkz\" (UniqueName: \"kubernetes.io/projected/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-kube-api-access-6tmkz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.470710 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.470757 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.470789 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.470836 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-logs\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.471405 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.471640 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.474807 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.475125 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.476070 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.491118 5023 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6tmkz\" (UniqueName: \"kubernetes.io/projected/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-kube-api-access-6tmkz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.536249 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.895131 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: I0219 08:23:36.975255 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:36 crc kubenswrapper[5023]: W0219 08:23:36.978637 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04a4f58f_fe8c_4fec_9b67_6ba1d3a00da2.slice/crio-26d874da9bcffb80ada89f66b64be747199992a3a58cec75cd0bd43ac09f0677 WatchSource:0}: Error finding container 26d874da9bcffb80ada89f66b64be747199992a3a58cec75cd0bd43ac09f0677: Status 404 returned error can't find the container with id 26d874da9bcffb80ada89f66b64be747199992a3a58cec75cd0bd43ac09f0677 Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.055752 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.884334 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"df3f80bb-8d32-49ed-9ab3-89586fe20cb4","Type":"ContainerStarted","Data":"a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080"} Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.884636 5023 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"df3f80bb-8d32-49ed-9ab3-89586fe20cb4","Type":"ContainerStarted","Data":"695ad57f7355ac622ff3f7a8ad3eef818c6b148103edf6cc9bd00f494fbccdac"} Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.885519 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"60402bd7-073d-44dd-9655-59083be6b132","Type":"ContainerStarted","Data":"39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf"} Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.885539 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"60402bd7-073d-44dd-9655-59083be6b132","Type":"ContainerStarted","Data":"4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c"} Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.885548 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"60402bd7-073d-44dd-9655-59083be6b132","Type":"ContainerStarted","Data":"bac575a1de8db390b8c115313e111582e17bbc4c374070f5d1f5b96a2b0f8c05"} Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.886018 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.887110 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2","Type":"ContainerStarted","Data":"e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773"} Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.887155 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2","Type":"ContainerStarted","Data":"26d874da9bcffb80ada89f66b64be747199992a3a58cec75cd0bd43ac09f0677"} Feb 
19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.907908 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.907891766 podStartE2EDuration="1.907891766s" podCreationTimestamp="2026-02-19 08:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:37.900977063 +0000 UTC m=+1375.558096031" watchObservedRunningTime="2026-02-19 08:23:37.907891766 +0000 UTC m=+1375.565010714" Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.940545 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.9405268420000001 podStartE2EDuration="1.940526842s" podCreationTimestamp="2026-02-19 08:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:37.937379519 +0000 UTC m=+1375.594498467" watchObservedRunningTime="2026-02-19 08:23:37.940526842 +0000 UTC m=+1375.597645790" Feb 19 08:23:37 crc kubenswrapper[5023]: I0219 08:23:37.941019 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.941012095 podStartE2EDuration="1.941012095s" podCreationTimestamp="2026-02-19 08:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:37.92311419 +0000 UTC m=+1375.580233138" watchObservedRunningTime="2026-02-19 08:23:37.941012095 +0000 UTC m=+1375.598131043" Feb 19 08:23:40 crc kubenswrapper[5023]: I0219 08:23:40.144672 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:41 crc kubenswrapper[5023]: I0219 08:23:41.446957 5023 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:41 crc kubenswrapper[5023]: I0219 08:23:41.471798 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.447403 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.465651 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.472416 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.510385 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.537335 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.573546 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.958655 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.961761 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.984684 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:46 crc kubenswrapper[5023]: I0219 08:23:46.986484 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.012701 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.021346 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bl9zw"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.057836 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher2ec5-account-delete-27w96"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.059329 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.083976 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2ec5-account-delete-27w96"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.090922 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.112017 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfff\" (UniqueName: \"kubernetes.io/projected/0232f604-3236-4b62-890c-b39536d0c413-kube-api-access-6dfff\") pod \"watcher2ec5-account-delete-27w96\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.112153 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0232f604-3236-4b62-890c-b39536d0c413-operator-scripts\") pod \"watcher2ec5-account-delete-27w96\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.151597 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.151888 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-central-agent" containerID="cri-o://6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.152266 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="proxy-httpd" containerID="cri-o://ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.152313 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="sg-core" containerID="cri-o://6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.152351 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-notification-agent" containerID="cri-o://df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.161942 5023 prober.go:107] "Probe failed" probeType="Readiness" 
pod="watcher-kuttl-default/ceilometer-0" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.156:3000/\": EOF" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.179753 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.179988 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="60402bd7-073d-44dd-9655-59083be6b132" containerName="watcher-kuttl-api-log" containerID="cri-o://4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.180062 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="60402bd7-073d-44dd-9655-59083be6b132" containerName="watcher-api" containerID="cri-o://39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.204351 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.204593 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" containerName="watcher-applier" containerID="cri-o://e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.213757 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0232f604-3236-4b62-890c-b39536d0c413-operator-scripts\") pod \"watcher2ec5-account-delete-27w96\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " 
pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.213854 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfff\" (UniqueName: \"kubernetes.io/projected/0232f604-3236-4b62-890c-b39536d0c413-kube-api-access-6dfff\") pod \"watcher2ec5-account-delete-27w96\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.214898 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0232f604-3236-4b62-890c-b39536d0c413-operator-scripts\") pod \"watcher2ec5-account-delete-27w96\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.236952 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfff\" (UniqueName: \"kubernetes.io/projected/0232f604-3236-4b62-890c-b39536d0c413-kube-api-access-6dfff\") pod \"watcher2ec5-account-delete-27w96\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.384778 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.490744 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0f1e41-657a-40f7-8d0c-62fce6a96905" path="/var/lib/kubelet/pods/5e0f1e41-657a-40f7-8d0c-62fce6a96905/volumes" Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.872496 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2ec5-account-delete-27w96"] Feb 19 08:23:49 crc kubenswrapper[5023]: W0219 08:23:49.900740 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0232f604_3236_4b62_890c_b39536d0c413.slice/crio-5637fc171ef241da721c2cc2dc24e3120da38f0fafd2ce15af3c11775ca480dc WatchSource:0}: Error finding container 5637fc171ef241da721c2cc2dc24e3120da38f0fafd2ce15af3c11775ca480dc: Status 404 returned error can't find the container with id 5637fc171ef241da721c2cc2dc24e3120da38f0fafd2ce15af3c11775ca480dc Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.987240 5023 generic.go:334] "Generic (PLEG): container finished" podID="60402bd7-073d-44dd-9655-59083be6b132" containerID="4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c" exitCode=143 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.987325 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"60402bd7-073d-44dd-9655-59083be6b132","Type":"ContainerDied","Data":"4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c"} Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.993802 5023 generic.go:334] "Generic (PLEG): container finished" podID="48974a36-e692-4aae-911f-d1c55886e393" containerID="ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8" exitCode=0 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.993833 5023 generic.go:334] "Generic (PLEG): container 
finished" podID="48974a36-e692-4aae-911f-d1c55886e393" containerID="6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9" exitCode=2 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.993840 5023 generic.go:334] "Generic (PLEG): container finished" podID="48974a36-e692-4aae-911f-d1c55886e393" containerID="6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552" exitCode=0 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.993913 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerDied","Data":"ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8"} Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.993945 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerDied","Data":"6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9"} Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.993956 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerDied","Data":"6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552"} Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.996245 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="df3f80bb-8d32-49ed-9ab3-89586fe20cb4" containerName="watcher-decision-engine" containerID="cri-o://a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080" gracePeriod=30 Feb 19 08:23:49 crc kubenswrapper[5023]: I0219 08:23:49.996278 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" 
event={"ID":"0232f604-3236-4b62-890c-b39536d0c413","Type":"ContainerStarted","Data":"5637fc171ef241da721c2cc2dc24e3120da38f0fafd2ce15af3c11775ca480dc"} Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.820606 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.952659 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h5gw\" (UniqueName: \"kubernetes.io/projected/60402bd7-073d-44dd-9655-59083be6b132-kube-api-access-2h5gw\") pod \"60402bd7-073d-44dd-9655-59083be6b132\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.953798 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-combined-ca-bundle\") pod \"60402bd7-073d-44dd-9655-59083be6b132\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.953938 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60402bd7-073d-44dd-9655-59083be6b132-logs\") pod \"60402bd7-073d-44dd-9655-59083be6b132\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.954036 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-config-data\") pod \"60402bd7-073d-44dd-9655-59083be6b132\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.954117 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-custom-prometheus-ca\") pod \"60402bd7-073d-44dd-9655-59083be6b132\" (UID: \"60402bd7-073d-44dd-9655-59083be6b132\") " Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.955320 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60402bd7-073d-44dd-9655-59083be6b132-logs" (OuterVolumeSpecName: "logs") pod "60402bd7-073d-44dd-9655-59083be6b132" (UID: "60402bd7-073d-44dd-9655-59083be6b132"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.960673 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60402bd7-073d-44dd-9655-59083be6b132-kube-api-access-2h5gw" (OuterVolumeSpecName: "kube-api-access-2h5gw") pod "60402bd7-073d-44dd-9655-59083be6b132" (UID: "60402bd7-073d-44dd-9655-59083be6b132"). InnerVolumeSpecName "kube-api-access-2h5gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:50 crc kubenswrapper[5023]: I0219 08:23:50.988829 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60402bd7-073d-44dd-9655-59083be6b132" (UID: "60402bd7-073d-44dd-9655-59083be6b132"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.009682 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "60402bd7-073d-44dd-9655-59083be6b132" (UID: "60402bd7-073d-44dd-9655-59083be6b132"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.010518 5023 generic.go:334] "Generic (PLEG): container finished" podID="0232f604-3236-4b62-890c-b39536d0c413" containerID="95415aa5565d7eb4dede9db3b3910be8a304e6a497f69508fcd75cff277d8935" exitCode=0 Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.010595 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" event={"ID":"0232f604-3236-4b62-890c-b39536d0c413","Type":"ContainerDied","Data":"95415aa5565d7eb4dede9db3b3910be8a304e6a497f69508fcd75cff277d8935"} Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.017689 5023 generic.go:334] "Generic (PLEG): container finished" podID="60402bd7-073d-44dd-9655-59083be6b132" containerID="39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf" exitCode=0 Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.017735 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"60402bd7-073d-44dd-9655-59083be6b132","Type":"ContainerDied","Data":"39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf"} Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.017759 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"60402bd7-073d-44dd-9655-59083be6b132","Type":"ContainerDied","Data":"bac575a1de8db390b8c115313e111582e17bbc4c374070f5d1f5b96a2b0f8c05"} Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.017779 5023 scope.go:117] "RemoveContainer" containerID="39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.017966 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.040661 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-config-data" (OuterVolumeSpecName: "config-data") pod "60402bd7-073d-44dd-9655-59083be6b132" (UID: "60402bd7-073d-44dd-9655-59083be6b132"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.044235 5023 scope.go:117] "RemoveContainer" containerID="4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.055455 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.055488 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.055501 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h5gw\" (UniqueName: \"kubernetes.io/projected/60402bd7-073d-44dd-9655-59083be6b132-kube-api-access-2h5gw\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.055510 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60402bd7-073d-44dd-9655-59083be6b132-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.055518 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60402bd7-073d-44dd-9655-59083be6b132-logs\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.062427 5023 scope.go:117] "RemoveContainer" containerID="39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf" Feb 19 08:23:51 crc kubenswrapper[5023]: E0219 08:23:51.062925 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf\": container with ID starting with 39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf not found: ID does not exist" containerID="39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.062965 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf"} err="failed to get container status \"39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf\": rpc error: code = NotFound desc = could not find container \"39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf\": container with ID starting with 39adaa9e6db50c9b492316ea51d6766ccf0a30ff3a634b423f649d93af2771cf not found: ID does not exist" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.062992 5023 scope.go:117] "RemoveContainer" containerID="4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c" Feb 19 08:23:51 crc kubenswrapper[5023]: E0219 08:23:51.063371 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c\": container with ID starting with 4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c not found: ID does not exist" containerID="4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.063414 5023 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c"} err="failed to get container status \"4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c\": rpc error: code = NotFound desc = could not find container \"4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c\": container with ID starting with 4b0efa79ad845a6e335dee25cc580a05e3bd33b3733f252d91d57367c544278c not found: ID does not exist" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.377151 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.388911 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:23:51 crc kubenswrapper[5023]: E0219 08:23:51.475472 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:23:51 crc kubenswrapper[5023]: E0219 08:23:51.477036 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:23:51 crc kubenswrapper[5023]: E0219 08:23:51.485089 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:23:51 crc kubenswrapper[5023]: E0219 08:23:51.485125 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" containerName="watcher-applier" Feb 19 08:23:51 crc kubenswrapper[5023]: I0219 08:23:51.490904 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60402bd7-073d-44dd-9655-59083be6b132" path="/var/lib/kubelet/pods/60402bd7-073d-44dd-9655-59083be6b132/volumes" Feb 19 08:23:52 crc kubenswrapper[5023]: I0219 08:23:52.412945 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:52 crc kubenswrapper[5023]: I0219 08:23:52.533189 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0232f604-3236-4b62-890c-b39536d0c413-operator-scripts\") pod \"0232f604-3236-4b62-890c-b39536d0c413\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " Feb 19 08:23:52 crc kubenswrapper[5023]: I0219 08:23:52.533366 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dfff\" (UniqueName: \"kubernetes.io/projected/0232f604-3236-4b62-890c-b39536d0c413-kube-api-access-6dfff\") pod \"0232f604-3236-4b62-890c-b39536d0c413\" (UID: \"0232f604-3236-4b62-890c-b39536d0c413\") " Feb 19 08:23:52 crc kubenswrapper[5023]: I0219 08:23:52.533721 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0232f604-3236-4b62-890c-b39536d0c413-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0232f604-3236-4b62-890c-b39536d0c413" (UID: "0232f604-3236-4b62-890c-b39536d0c413"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:23:52 crc kubenswrapper[5023]: I0219 08:23:52.533826 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0232f604-3236-4b62-890c-b39536d0c413-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:52 crc kubenswrapper[5023]: I0219 08:23:52.537204 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0232f604-3236-4b62-890c-b39536d0c413-kube-api-access-6dfff" (OuterVolumeSpecName: "kube-api-access-6dfff") pod "0232f604-3236-4b62-890c-b39536d0c413" (UID: "0232f604-3236-4b62-890c-b39536d0c413"). InnerVolumeSpecName "kube-api-access-6dfff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:52 crc kubenswrapper[5023]: I0219 08:23:52.634932 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dfff\" (UniqueName: \"kubernetes.io/projected/0232f604-3236-4b62-890c-b39536d0c413-kube-api-access-6dfff\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:53 crc kubenswrapper[5023]: I0219 08:23:53.037689 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" event={"ID":"0232f604-3236-4b62-890c-b39536d0c413","Type":"ContainerDied","Data":"5637fc171ef241da721c2cc2dc24e3120da38f0fafd2ce15af3c11775ca480dc"} Feb 19 08:23:53 crc kubenswrapper[5023]: I0219 08:23:53.037729 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5637fc171ef241da721c2cc2dc24e3120da38f0fafd2ce15af3c11775ca480dc" Feb 19 08:23:53 crc kubenswrapper[5023]: I0219 08:23:53.037811 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2ec5-account-delete-27w96" Feb 19 08:23:53 crc kubenswrapper[5023]: I0219 08:23:53.879595 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.046533 5023 generic.go:334] "Generic (PLEG): container finished" podID="04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" containerID="e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" exitCode=0 Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.046577 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2","Type":"ContainerDied","Data":"e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773"} Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.046593 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.046608 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2","Type":"ContainerDied","Data":"26d874da9bcffb80ada89f66b64be747199992a3a58cec75cd0bd43ac09f0677"} Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.046635 5023 scope.go:117] "RemoveContainer" containerID="e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.053245 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-combined-ca-bundle\") pod \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.053350 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ch24\" (UniqueName: \"kubernetes.io/projected/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-kube-api-access-7ch24\") pod 
\"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.053421 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-logs\") pod \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.053528 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-config-data\") pod \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\" (UID: \"04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.054514 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-logs" (OuterVolumeSpecName: "logs") pod "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" (UID: "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.072961 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-kube-api-access-7ch24" (OuterVolumeSpecName: "kube-api-access-7ch24") pod "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" (UID: "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2"). InnerVolumeSpecName "kube-api-access-7ch24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.085883 5023 scope.go:117] "RemoveContainer" containerID="e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" Feb 19 08:23:54 crc kubenswrapper[5023]: E0219 08:23:54.087232 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773\": container with ID starting with e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773 not found: ID does not exist" containerID="e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.087267 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773"} err="failed to get container status \"e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773\": rpc error: code = NotFound desc = could not find container \"e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773\": container with ID starting with e87068529e95ed12bfa3595e900876266e878e7861251dab18e294fa92c3e773 not found: ID does not exist" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.119229 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-h4c45"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.120726 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" (UID: "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.124807 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-config-data" (OuterVolumeSpecName: "config-data") pod "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" (UID: "04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.127429 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-h4c45"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.133785 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.139296 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher2ec5-account-delete-27w96"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.145032 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-2ec5-account-create-update-qxtbc"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.150603 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher2ec5-account-delete-27w96"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.154808 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ch24\" (UniqueName: \"kubernetes.io/projected/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-kube-api-access-7ch24\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.154832 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.154843 5023 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.154854 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.383017 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.390701 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.757487 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.853159 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.868001 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-custom-prometheus-ca\") pod \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.868051 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmkz\" (UniqueName: \"kubernetes.io/projected/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-kube-api-access-6tmkz\") pod \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.868107 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-config-data\") pod \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.868207 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-logs\") pod \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.868233 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-combined-ca-bundle\") pod \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\" (UID: \"df3f80bb-8d32-49ed-9ab3-89586fe20cb4\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.880117 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-logs" (OuterVolumeSpecName: "logs") pod "df3f80bb-8d32-49ed-9ab3-89586fe20cb4" (UID: "df3f80bb-8d32-49ed-9ab3-89586fe20cb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.884180 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-kube-api-access-6tmkz" (OuterVolumeSpecName: "kube-api-access-6tmkz") pod "df3f80bb-8d32-49ed-9ab3-89586fe20cb4" (UID: "df3f80bb-8d32-49ed-9ab3-89586fe20cb4"). InnerVolumeSpecName "kube-api-access-6tmkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.903072 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df3f80bb-8d32-49ed-9ab3-89586fe20cb4" (UID: "df3f80bb-8d32-49ed-9ab3-89586fe20cb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.903216 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "df3f80bb-8d32-49ed-9ab3-89586fe20cb4" (UID: "df3f80bb-8d32-49ed-9ab3-89586fe20cb4"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.919689 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-config-data" (OuterVolumeSpecName: "config-data") pod "df3f80bb-8d32-49ed-9ab3-89586fe20cb4" (UID: "df3f80bb-8d32-49ed-9ab3-89586fe20cb4"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.971814 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-sg-core-conf-yaml\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.971911 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-run-httpd\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.971997 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-log-httpd\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.972055 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-combined-ca-bundle\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.972120 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-scripts\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.972216 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-k8mx2\" (UniqueName: \"kubernetes.io/projected/48974a36-e692-4aae-911f-d1c55886e393-kube-api-access-k8mx2\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.972258 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-config-data\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.972317 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-ceilometer-tls-certs\") pod \"48974a36-e692-4aae-911f-d1c55886e393\" (UID: \"48974a36-e692-4aae-911f-d1c55886e393\") " Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.973102 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.973126 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.973140 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.973154 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tmkz\" (UniqueName: \"kubernetes.io/projected/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-kube-api-access-6tmkz\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.973170 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3f80bb-8d32-49ed-9ab3-89586fe20cb4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.974036 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.974534 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.976958 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48974a36-e692-4aae-911f-d1c55886e393-kube-api-access-k8mx2" (OuterVolumeSpecName: "kube-api-access-k8mx2") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "kube-api-access-k8mx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.985805 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-scripts" (OuterVolumeSpecName: "scripts") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:54 crc kubenswrapper[5023]: I0219 08:23:54.997008 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.019133 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.033660 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.055301 5023 generic.go:334] "Generic (PLEG): container finished" podID="df3f80bb-8d32-49ed-9ab3-89586fe20cb4" containerID="a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080" exitCode=0 Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.055380 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.055386 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"df3f80bb-8d32-49ed-9ab3-89586fe20cb4","Type":"ContainerDied","Data":"a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080"} Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.055778 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"df3f80bb-8d32-49ed-9ab3-89586fe20cb4","Type":"ContainerDied","Data":"695ad57f7355ac622ff3f7a8ad3eef818c6b148103edf6cc9bd00f494fbccdac"} Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.055824 5023 scope.go:117] "RemoveContainer" containerID="a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.060300 5023 generic.go:334] "Generic (PLEG): container finished" podID="48974a36-e692-4aae-911f-d1c55886e393" containerID="df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354" exitCode=0 Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.060334 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerDied","Data":"df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354"} Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.060356 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"48974a36-e692-4aae-911f-d1c55886e393","Type":"ContainerDied","Data":"c679210a34e498c4ecf76e61852498070aa6ba44376f4d8236fcab55d2ded5e8"} Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.060376 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.064524 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-config-data" (OuterVolumeSpecName: "config-data") pod "48974a36-e692-4aae-911f-d1c55886e393" (UID: "48974a36-e692-4aae-911f-d1c55886e393"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074065 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074096 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8mx2\" (UniqueName: \"kubernetes.io/projected/48974a36-e692-4aae-911f-d1c55886e393-kube-api-access-k8mx2\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074107 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074158 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074166 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074175 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074183 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48974a36-e692-4aae-911f-d1c55886e393-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.074190 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48974a36-e692-4aae-911f-d1c55886e393-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.087567 5023 scope.go:117] "RemoveContainer" containerID="a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.092127 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080\": container with ID starting with a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080 not found: ID does not exist" containerID="a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.092178 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080"} err="failed to get container status \"a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080\": rpc error: code = NotFound desc = could not find container \"a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080\": container with ID starting with a5f6ba3c60bc7cddd2aec8b6c3b136031527f3ea3faf542a5fced34006cbb080 not found: ID does not exist" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.092210 5023 scope.go:117] "RemoveContainer" 
containerID="ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.101470 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.109126 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.111876 5023 scope.go:117] "RemoveContainer" containerID="6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.133975 5023 scope.go:117] "RemoveContainer" containerID="df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.157194 5023 scope.go:117] "RemoveContainer" containerID="6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.174671 5023 scope.go:117] "RemoveContainer" containerID="ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.175190 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8\": container with ID starting with ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8 not found: ID does not exist" containerID="ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.175261 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8"} err="failed to get container status \"ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8\": rpc error: code = NotFound desc = could not 
find container \"ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8\": container with ID starting with ae66fa344739045d44b989efc36757c3fa7bf34a4f1a0a2b88ffdc812d0b25c8 not found: ID does not exist" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.175305 5023 scope.go:117] "RemoveContainer" containerID="6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.175713 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9\": container with ID starting with 6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9 not found: ID does not exist" containerID="6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.175740 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9"} err="failed to get container status \"6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9\": rpc error: code = NotFound desc = could not find container \"6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9\": container with ID starting with 6dea8cd641f65625d4fa411037808f9f32508de1ea9d69b7fe8579195da980e9 not found: ID does not exist" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.175755 5023 scope.go:117] "RemoveContainer" containerID="df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.175977 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354\": container with ID starting with df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354 not found: ID 
does not exist" containerID="df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.176075 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354"} err="failed to get container status \"df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354\": rpc error: code = NotFound desc = could not find container \"df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354\": container with ID starting with df80bb992fa5057c5ba0a0f68f1739755c8db2e6da92022745a4cdbc46a04354 not found: ID does not exist" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.176113 5023 scope.go:117] "RemoveContainer" containerID="6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.176490 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552\": container with ID starting with 6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552 not found: ID does not exist" containerID="6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.176516 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552"} err="failed to get container status \"6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552\": rpc error: code = NotFound desc = could not find container \"6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552\": container with ID starting with 6d0843728ef870b84c30a7a099e81691bb8f1acd7e8664c74f5fd7bd159ac552 not found: ID does not exist" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.404233 5023 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.412544 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.424918 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425352 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60402bd7-073d-44dd-9655-59083be6b132" containerName="watcher-kuttl-api-log" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425378 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="60402bd7-073d-44dd-9655-59083be6b132" containerName="watcher-kuttl-api-log" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425394 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-central-agent" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425402 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-central-agent" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425419 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="sg-core" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425427 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="sg-core" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425444 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60402bd7-073d-44dd-9655-59083be6b132" containerName="watcher-api" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425451 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="60402bd7-073d-44dd-9655-59083be6b132" 
containerName="watcher-api" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425478 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="proxy-httpd" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425485 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="proxy-httpd" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425493 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0232f604-3236-4b62-890c-b39536d0c413" containerName="mariadb-account-delete" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425500 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="0232f604-3236-4b62-890c-b39536d0c413" containerName="mariadb-account-delete" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425513 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3f80bb-8d32-49ed-9ab3-89586fe20cb4" containerName="watcher-decision-engine" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425520 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3f80bb-8d32-49ed-9ab3-89586fe20cb4" containerName="watcher-decision-engine" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425531 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-notification-agent" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425540 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-notification-agent" Feb 19 08:23:55 crc kubenswrapper[5023]: E0219 08:23:55.425559 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" containerName="watcher-applier" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425566 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" containerName="watcher-applier" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425753 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="sg-core" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425767 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" containerName="watcher-applier" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425778 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="0232f604-3236-4b62-890c-b39536d0c413" containerName="mariadb-account-delete" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425787 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3f80bb-8d32-49ed-9ab3-89586fe20cb4" containerName="watcher-decision-engine" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425799 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="60402bd7-073d-44dd-9655-59083be6b132" containerName="watcher-api" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425810 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="proxy-httpd" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425826 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-central-agent" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425836 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="48974a36-e692-4aae-911f-d1c55886e393" containerName="ceilometer-notification-agent" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.425847 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="60402bd7-073d-44dd-9655-59083be6b132" containerName="watcher-kuttl-api-log" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.427458 
5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.432558 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.432721 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.432810 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.433905 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.485949 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0232f604-3236-4b62-890c-b39536d0c413" path="/var/lib/kubelet/pods/0232f604-3236-4b62-890c-b39536d0c413/volumes" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.486740 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2" path="/var/lib/kubelet/pods/04a4f58f-fe8c-4fec-9b67-6ba1d3a00da2/volumes" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.487388 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340c7939-dc28-4472-865f-09566ccb8e37" path="/var/lib/kubelet/pods/340c7939-dc28-4472-865f-09566ccb8e37/volumes" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.488503 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ebba87-5d6e-4158-a4d7-e5232469601a" path="/var/lib/kubelet/pods/41ebba87-5d6e-4158-a4d7-e5232469601a/volumes" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.489186 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48974a36-e692-4aae-911f-d1c55886e393" 
path="/var/lib/kubelet/pods/48974a36-e692-4aae-911f-d1c55886e393/volumes" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.490053 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3f80bb-8d32-49ed-9ab3-89586fe20cb4" path="/var/lib/kubelet/pods/df3f80bb-8d32-49ed-9ab3-89586fe20cb4/volumes" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582183 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582243 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582266 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-config-data\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582296 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-scripts\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582315 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88pc\" (UniqueName: \"kubernetes.io/projected/4dbda052-3646-4e81-96b9-ce6549f16457-kube-api-access-v88pc\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582347 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-run-httpd\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582376 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-log-httpd\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.582434 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.675292 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-005d-account-create-update-646cq"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.676236 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.680184 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.683598 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.683802 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.683842 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-config-data\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.683888 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-scripts\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.683910 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88pc\" (UniqueName: \"kubernetes.io/projected/4dbda052-3646-4e81-96b9-ce6549f16457-kube-api-access-v88pc\") 
pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.683951 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-run-httpd\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.683990 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-log-httpd\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.684042 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.684517 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-run-httpd\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.684574 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-log-httpd\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.689586 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.689797 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.689830 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-z9wcj"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.690991 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.695981 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.701350 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-config-data\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.711592 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-scripts\") pod \"ceilometer-0\" 
(UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.714745 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v88pc\" (UniqueName: \"kubernetes.io/projected/4dbda052-3646-4e81-96b9-ce6549f16457-kube-api-access-v88pc\") pod \"ceilometer-0\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.723167 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-005d-account-create-update-646cq"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.732682 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z9wcj"] Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.741765 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.785308 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4d9b\" (UniqueName: \"kubernetes.io/projected/10a41a04-662b-45da-98b7-32512a1396d3-kube-api-access-j4d9b\") pod \"watcher-db-create-z9wcj\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.785356 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25lx2\" (UniqueName: \"kubernetes.io/projected/ed7ace87-2a96-4d9d-bffd-ae72e694b353-kube-api-access-25lx2\") pod \"watcher-005d-account-create-update-646cq\" (UID: \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.785461 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a41a04-662b-45da-98b7-32512a1396d3-operator-scripts\") pod \"watcher-db-create-z9wcj\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.785527 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed7ace87-2a96-4d9d-bffd-ae72e694b353-operator-scripts\") pod \"watcher-005d-account-create-update-646cq\" (UID: \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.887410 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a41a04-662b-45da-98b7-32512a1396d3-operator-scripts\") pod \"watcher-db-create-z9wcj\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.887492 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed7ace87-2a96-4d9d-bffd-ae72e694b353-operator-scripts\") pod \"watcher-005d-account-create-update-646cq\" (UID: \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.887521 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4d9b\" (UniqueName: \"kubernetes.io/projected/10a41a04-662b-45da-98b7-32512a1396d3-kube-api-access-j4d9b\") pod \"watcher-db-create-z9wcj\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " pod="watcher-kuttl-default/watcher-db-create-z9wcj" 
Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.887546 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25lx2\" (UniqueName: \"kubernetes.io/projected/ed7ace87-2a96-4d9d-bffd-ae72e694b353-kube-api-access-25lx2\") pod \"watcher-005d-account-create-update-646cq\" (UID: \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.889454 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a41a04-662b-45da-98b7-32512a1396d3-operator-scripts\") pod \"watcher-db-create-z9wcj\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.890134 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed7ace87-2a96-4d9d-bffd-ae72e694b353-operator-scripts\") pod \"watcher-005d-account-create-update-646cq\" (UID: \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.910228 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4d9b\" (UniqueName: \"kubernetes.io/projected/10a41a04-662b-45da-98b7-32512a1396d3-kube-api-access-j4d9b\") pod \"watcher-db-create-z9wcj\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:55 crc kubenswrapper[5023]: I0219 08:23:55.913147 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25lx2\" (UniqueName: \"kubernetes.io/projected/ed7ace87-2a96-4d9d-bffd-ae72e694b353-kube-api-access-25lx2\") pod \"watcher-005d-account-create-update-646cq\" (UID: 
\"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:56 crc kubenswrapper[5023]: I0219 08:23:56.151665 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:56 crc kubenswrapper[5023]: I0219 08:23:56.163050 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:56 crc kubenswrapper[5023]: I0219 08:23:56.219677 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:23:56 crc kubenswrapper[5023]: I0219 08:23:56.714040 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-005d-account-create-update-646cq"] Feb 19 08:23:56 crc kubenswrapper[5023]: I0219 08:23:56.894889 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z9wcj"] Feb 19 08:23:57 crc kubenswrapper[5023]: I0219 08:23:57.079404 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-z9wcj" event={"ID":"10a41a04-662b-45da-98b7-32512a1396d3","Type":"ContainerStarted","Data":"716a6a50608eec82367c518b555474b7f00193a0aa5600f541aa7e4945b0f090"} Feb 19 08:23:57 crc kubenswrapper[5023]: I0219 08:23:57.081027 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" event={"ID":"ed7ace87-2a96-4d9d-bffd-ae72e694b353","Type":"ContainerStarted","Data":"1cf851f02571033dc5d4ea5899be72d10c06d98f0fc873694134a7db5400e1f6"} Feb 19 08:23:57 crc kubenswrapper[5023]: I0219 08:23:57.081053 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" 
event={"ID":"ed7ace87-2a96-4d9d-bffd-ae72e694b353","Type":"ContainerStarted","Data":"80807cefd5873f4faac53ad7a8340a0130279e20c4d4b3a36b29f77669ff30ce"} Feb 19 08:23:57 crc kubenswrapper[5023]: I0219 08:23:57.083217 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerStarted","Data":"57f4441cb40f3e0750529e9e41089d4fb82588d7fec4d81c750e3069b67db826"} Feb 19 08:23:57 crc kubenswrapper[5023]: I0219 08:23:57.083254 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerStarted","Data":"72a560261e98bcf1e5d8eaeeeb36d0df0b9745dfc6b4ff5b421886a5976c2e5f"} Feb 19 08:23:57 crc kubenswrapper[5023]: I0219 08:23:57.109332 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" podStartSLOduration=2.109318656 podStartE2EDuration="2.109318656s" podCreationTimestamp="2026-02-19 08:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:23:57.103766289 +0000 UTC m=+1394.760885237" watchObservedRunningTime="2026-02-19 08:23:57.109318656 +0000 UTC m=+1394.766437604" Feb 19 08:23:58 crc kubenswrapper[5023]: I0219 08:23:58.092630 5023 generic.go:334] "Generic (PLEG): container finished" podID="ed7ace87-2a96-4d9d-bffd-ae72e694b353" containerID="1cf851f02571033dc5d4ea5899be72d10c06d98f0fc873694134a7db5400e1f6" exitCode=0 Feb 19 08:23:58 crc kubenswrapper[5023]: I0219 08:23:58.092752 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" event={"ID":"ed7ace87-2a96-4d9d-bffd-ae72e694b353","Type":"ContainerDied","Data":"1cf851f02571033dc5d4ea5899be72d10c06d98f0fc873694134a7db5400e1f6"} Feb 19 08:23:58 crc 
kubenswrapper[5023]: I0219 08:23:58.095777 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerStarted","Data":"1f7ed8742b380968e3f3fbe635ba52c028613b72d54ce3552397945f05007bf5"} Feb 19 08:23:58 crc kubenswrapper[5023]: I0219 08:23:58.097533 5023 generic.go:334] "Generic (PLEG): container finished" podID="10a41a04-662b-45da-98b7-32512a1396d3" containerID="c73edfc2b2711e55eafc83c17bc03f63cf8ae448159b7f97c999403af864b6c0" exitCode=0 Feb 19 08:23:58 crc kubenswrapper[5023]: I0219 08:23:58.097565 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-z9wcj" event={"ID":"10a41a04-662b-45da-98b7-32512a1396d3","Type":"ContainerDied","Data":"c73edfc2b2711e55eafc83c17bc03f63cf8ae448159b7f97c999403af864b6c0"} Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.108116 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerStarted","Data":"12cd2d250ad880a551c6cc452dbf809b6e847f9d7185a68643b206be268e4254"} Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.500426 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.603371 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.624161 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25lx2\" (UniqueName: \"kubernetes.io/projected/ed7ace87-2a96-4d9d-bffd-ae72e694b353-kube-api-access-25lx2\") pod \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\" (UID: \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.624341 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed7ace87-2a96-4d9d-bffd-ae72e694b353-operator-scripts\") pod \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\" (UID: \"ed7ace87-2a96-4d9d-bffd-ae72e694b353\") " Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.625085 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7ace87-2a96-4d9d-bffd-ae72e694b353-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed7ace87-2a96-4d9d-bffd-ae72e694b353" (UID: "ed7ace87-2a96-4d9d-bffd-ae72e694b353"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.635965 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7ace87-2a96-4d9d-bffd-ae72e694b353-kube-api-access-25lx2" (OuterVolumeSpecName: "kube-api-access-25lx2") pod "ed7ace87-2a96-4d9d-bffd-ae72e694b353" (UID: "ed7ace87-2a96-4d9d-bffd-ae72e694b353"). InnerVolumeSpecName "kube-api-access-25lx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.725577 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4d9b\" (UniqueName: \"kubernetes.io/projected/10a41a04-662b-45da-98b7-32512a1396d3-kube-api-access-j4d9b\") pod \"10a41a04-662b-45da-98b7-32512a1396d3\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.725857 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a41a04-662b-45da-98b7-32512a1396d3-operator-scripts\") pod \"10a41a04-662b-45da-98b7-32512a1396d3\" (UID: \"10a41a04-662b-45da-98b7-32512a1396d3\") " Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.726485 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a41a04-662b-45da-98b7-32512a1396d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10a41a04-662b-45da-98b7-32512a1396d3" (UID: "10a41a04-662b-45da-98b7-32512a1396d3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.726748 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed7ace87-2a96-4d9d-bffd-ae72e694b353-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.726767 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a41a04-662b-45da-98b7-32512a1396d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.726776 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25lx2\" (UniqueName: \"kubernetes.io/projected/ed7ace87-2a96-4d9d-bffd-ae72e694b353-kube-api-access-25lx2\") on node \"crc\" DevicePath \"\"" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.729674 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a41a04-662b-45da-98b7-32512a1396d3-kube-api-access-j4d9b" (OuterVolumeSpecName: "kube-api-access-j4d9b") pod "10a41a04-662b-45da-98b7-32512a1396d3" (UID: "10a41a04-662b-45da-98b7-32512a1396d3"). InnerVolumeSpecName "kube-api-access-j4d9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:23:59 crc kubenswrapper[5023]: I0219 08:23:59.828386 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4d9b\" (UniqueName: \"kubernetes.io/projected/10a41a04-662b-45da-98b7-32512a1396d3-kube-api-access-j4d9b\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.119042 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerStarted","Data":"519d44fc76e53f106aa162bcf40e2a4f89f674458c450045e57c8e08da263fe1"} Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.120715 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.122680 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-z9wcj" event={"ID":"10a41a04-662b-45da-98b7-32512a1396d3","Type":"ContainerDied","Data":"716a6a50608eec82367c518b555474b7f00193a0aa5600f541aa7e4945b0f090"} Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.122709 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716a6a50608eec82367c518b555474b7f00193a0aa5600f541aa7e4945b0f090" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.122757 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z9wcj" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.124481 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" event={"ID":"ed7ace87-2a96-4d9d-bffd-ae72e694b353","Type":"ContainerDied","Data":"80807cefd5873f4faac53ad7a8340a0130279e20c4d4b3a36b29f77669ff30ce"} Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.124534 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80807cefd5873f4faac53ad7a8340a0130279e20c4d4b3a36b29f77669ff30ce" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.124582 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-005d-account-create-update-646cq" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.157994 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.511382884 podStartE2EDuration="5.157972123s" podCreationTimestamp="2026-02-19 08:23:55 +0000 UTC" firstStartedPulling="2026-02-19 08:23:56.238006762 +0000 UTC m=+1393.895125720" lastFinishedPulling="2026-02-19 08:23:59.884596011 +0000 UTC m=+1397.541714959" observedRunningTime="2026-02-19 08:24:00.14880787 +0000 UTC m=+1397.805926818" watchObservedRunningTime="2026-02-19 08:24:00.157972123 +0000 UTC m=+1397.815091071" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.968800 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm"] Feb 19 08:24:00 crc kubenswrapper[5023]: E0219 08:24:00.969598 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a41a04-662b-45da-98b7-32512a1396d3" containerName="mariadb-database-create" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.972888 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="10a41a04-662b-45da-98b7-32512a1396d3" containerName="mariadb-database-create" Feb 19 08:24:00 crc kubenswrapper[5023]: E0219 08:24:00.972974 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7ace87-2a96-4d9d-bffd-ae72e694b353" containerName="mariadb-account-create-update" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.972984 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7ace87-2a96-4d9d-bffd-ae72e694b353" containerName="mariadb-account-create-update" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.973469 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a41a04-662b-45da-98b7-32512a1396d3" containerName="mariadb-database-create" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.973503 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7ace87-2a96-4d9d-bffd-ae72e694b353" containerName="mariadb-account-create-update" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.975195 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.980789 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-5rlxl" Feb 19 08:24:00 crc kubenswrapper[5023]: I0219 08:24:00.981315 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.001696 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm"] Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.162092 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.162274 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.162319 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqcp\" (UniqueName: \"kubernetes.io/projected/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-kube-api-access-6kqcp\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.162353 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-config-data\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.263598 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.263709 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqcp\" (UniqueName: \"kubernetes.io/projected/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-kube-api-access-6kqcp\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.263755 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-config-data\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.263818 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: 
I0219 08:24:01.268346 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.268705 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-config-data\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.276419 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.292505 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqcp\" (UniqueName: \"kubernetes.io/projected/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-kube-api-access-6kqcp\") pod \"watcher-kuttl-db-sync-tt9nm\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.304740 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:01 crc kubenswrapper[5023]: I0219 08:24:01.774164 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm"] Feb 19 08:24:01 crc kubenswrapper[5023]: W0219 08:24:01.775528 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a04451e_8bde_4e66_9ec5_d7308f0bdbe2.slice/crio-7dd0166f5950724d9690e69abb6ac00612975c5f4dd9fa2b66079473f06bd73d WatchSource:0}: Error finding container 7dd0166f5950724d9690e69abb6ac00612975c5f4dd9fa2b66079473f06bd73d: Status 404 returned error can't find the container with id 7dd0166f5950724d9690e69abb6ac00612975c5f4dd9fa2b66079473f06bd73d Feb 19 08:24:02 crc kubenswrapper[5023]: I0219 08:24:02.144771 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" event={"ID":"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2","Type":"ContainerStarted","Data":"aa2bb14052a3c75a2cc5eecc10d084e351ff76c1e65278fb9608315662e5acfa"} Feb 19 08:24:02 crc kubenswrapper[5023]: I0219 08:24:02.145051 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" event={"ID":"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2","Type":"ContainerStarted","Data":"7dd0166f5950724d9690e69abb6ac00612975c5f4dd9fa2b66079473f06bd73d"} Feb 19 08:24:05 crc kubenswrapper[5023]: I0219 08:24:05.166169 5023 generic.go:334] "Generic (PLEG): container finished" podID="8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" containerID="aa2bb14052a3c75a2cc5eecc10d084e351ff76c1e65278fb9608315662e5acfa" exitCode=0 Feb 19 08:24:05 crc kubenswrapper[5023]: I0219 08:24:05.166262 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" 
event={"ID":"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2","Type":"ContainerDied","Data":"aa2bb14052a3c75a2cc5eecc10d084e351ff76c1e65278fb9608315662e5acfa"} Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.476045 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.648973 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-config-data\") pod \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.649070 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kqcp\" (UniqueName: \"kubernetes.io/projected/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-kube-api-access-6kqcp\") pod \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.649216 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-db-sync-config-data\") pod \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.649253 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-combined-ca-bundle\") pod \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\" (UID: \"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2\") " Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.654363 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" (UID: "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.660830 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-kube-api-access-6kqcp" (OuterVolumeSpecName: "kube-api-access-6kqcp") pod "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" (UID: "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2"). InnerVolumeSpecName "kube-api-access-6kqcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.676428 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" (UID: "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.696352 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-config-data" (OuterVolumeSpecName: "config-data") pod "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" (UID: "8a04451e-8bde-4e66-9ec5-d7308f0bdbe2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.750830 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.750872 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.750883 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:06 crc kubenswrapper[5023]: I0219 08:24:06.750892 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kqcp\" (UniqueName: \"kubernetes.io/projected/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2-kube-api-access-6kqcp\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.181940 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" event={"ID":"8a04451e-8bde-4e66-9ec5-d7308f0bdbe2","Type":"ContainerDied","Data":"7dd0166f5950724d9690e69abb6ac00612975c5f4dd9fa2b66079473f06bd73d"} Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.181985 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd0166f5950724d9690e69abb6ac00612975c5f4dd9fa2b66079473f06bd73d" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.181989 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.423556 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:07 crc kubenswrapper[5023]: E0219 08:24:07.423984 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" containerName="watcher-kuttl-db-sync" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.424004 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" containerName="watcher-kuttl-db-sync" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.424217 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" containerName="watcher-kuttl-db-sync" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.425256 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.427649 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.427871 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-5rlxl" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.428032 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.428094 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.441288 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:07 crc kubenswrapper[5023]: 
I0219 08:24:07.506988 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.517795 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.522122 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.524750 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.565077 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.565129 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.565211 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.565357 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.565385 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.565438 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rbs\" (UniqueName: \"kubernetes.io/projected/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-kube-api-access-g4rbs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.565584 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.602721 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.604027 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.607042 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.613688 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667114 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667162 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667217 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667249 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667412 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667477 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667542 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667588 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667637 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rbs\" (UniqueName: \"kubernetes.io/projected/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-kube-api-access-g4rbs\") pod \"watcher-kuttl-api-0\" (UID: 
\"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667706 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6k8b\" (UniqueName: \"kubernetes.io/projected/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-kube-api-access-p6k8b\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667854 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667881 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.667892 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.671387 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: 
\"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.671419 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.671535 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.672131 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.673524 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.685259 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rbs\" (UniqueName: \"kubernetes.io/projected/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-kube-api-access-g4rbs\") pod \"watcher-kuttl-api-0\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 
08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.742057 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.769950 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770006 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770058 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8br9\" (UniqueName: \"kubernetes.io/projected/e37c6ebe-c291-42ce-b082-67c5e054010d-kube-api-access-p8br9\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770079 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770121 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770150 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770173 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770202 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37c6ebe-c291-42ce-b082-67c5e054010d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770228 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6k8b\" (UniqueName: \"kubernetes.io/projected/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-kube-api-access-p6k8b\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.770889 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.773807 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.774165 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.775300 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.788164 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6k8b\" (UniqueName: \"kubernetes.io/projected/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-kube-api-access-p6k8b\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.838121 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.875962 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37c6ebe-c291-42ce-b082-67c5e054010d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.876082 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.876163 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8br9\" (UniqueName: \"kubernetes.io/projected/e37c6ebe-c291-42ce-b082-67c5e054010d-kube-api-access-p8br9\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.876230 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.877049 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37c6ebe-c291-42ce-b082-67c5e054010d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.884851 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.887856 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.913248 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8br9\" (UniqueName: \"kubernetes.io/projected/e37c6ebe-c291-42ce-b082-67c5e054010d-kube-api-access-p8br9\") pod \"watcher-kuttl-applier-0\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:07 crc kubenswrapper[5023]: I0219 08:24:07.929812 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:08 crc kubenswrapper[5023]: I0219 08:24:08.187264 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:08 crc kubenswrapper[5023]: I0219 08:24:08.305429 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:08 crc kubenswrapper[5023]: W0219 08:24:08.310776 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba01942f_a3f7_4f5e_8793_b2f5f24ebb90.slice/crio-f772e431b72635946b7e12e15bd5050121ae8f3e9742780e06cbb0e4fe87a069 WatchSource:0}: Error finding container f772e431b72635946b7e12e15bd5050121ae8f3e9742780e06cbb0e4fe87a069: Status 404 returned error can't find the container with id f772e431b72635946b7e12e15bd5050121ae8f3e9742780e06cbb0e4fe87a069 Feb 19 08:24:08 crc kubenswrapper[5023]: I0219 08:24:08.404644 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.201327 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e37c6ebe-c291-42ce-b082-67c5e054010d","Type":"ContainerStarted","Data":"6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864"} Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.201667 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e37c6ebe-c291-42ce-b082-67c5e054010d","Type":"ContainerStarted","Data":"66632edc7f21a8861e64c2def06c43c2b48b13a1eee2069a6c7009ea814ed4e2"} Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.202939 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90","Type":"ContainerStarted","Data":"b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede"} Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.202982 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90","Type":"ContainerStarted","Data":"f772e431b72635946b7e12e15bd5050121ae8f3e9742780e06cbb0e4fe87a069"} Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.206294 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad","Type":"ContainerStarted","Data":"d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8"} Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.206335 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad","Type":"ContainerStarted","Data":"f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811"} Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.206351 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad","Type":"ContainerStarted","Data":"28d8391c04244988bddb232278c0e44f9b8e3f07ac625f2e5a2e4589961a74c1"} Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.207338 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.237070 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.237052671 podStartE2EDuration="2.237052671s" podCreationTimestamp="2026-02-19 08:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:09.232407718 +0000 UTC m=+1406.889526666" watchObservedRunningTime="2026-02-19 08:24:09.237052671 +0000 UTC m=+1406.894171619" Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.262937 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.262913417 podStartE2EDuration="2.262913417s" podCreationTimestamp="2026-02-19 08:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:09.254432652 +0000 UTC m=+1406.911551610" watchObservedRunningTime="2026-02-19 08:24:09.262913417 +0000 UTC m=+1406.920032365" Feb 19 08:24:09 crc kubenswrapper[5023]: I0219 08:24:09.289515 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.289496112 podStartE2EDuration="2.289496112s" podCreationTimestamp="2026-02-19 08:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:09.282869616 +0000 UTC m=+1406.939988574" watchObservedRunningTime="2026-02-19 08:24:09.289496112 +0000 UTC m=+1406.946615060" Feb 19 08:24:11 crc kubenswrapper[5023]: I0219 08:24:11.240792 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:24:11 crc kubenswrapper[5023]: I0219 08:24:11.869850 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:24:11 crc kubenswrapper[5023]: I0219 08:24:11.870207 5023 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:24:12 crc kubenswrapper[5023]: I0219 08:24:12.126257 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:12 crc kubenswrapper[5023]: I0219 08:24:12.743143 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:12 crc kubenswrapper[5023]: I0219 08:24:12.931069 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:17 crc kubenswrapper[5023]: I0219 08:24:17.742950 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:17 crc kubenswrapper[5023]: I0219 08:24:17.754727 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:17 crc kubenswrapper[5023]: I0219 08:24:17.839213 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:17 crc kubenswrapper[5023]: I0219 08:24:17.883401 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:17 crc kubenswrapper[5023]: I0219 08:24:17.931140 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:17 crc kubenswrapper[5023]: I0219 08:24:17.963540 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:18 crc 
kubenswrapper[5023]: I0219 08:24:18.299165 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:18 crc kubenswrapper[5023]: I0219 08:24:18.317778 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:18 crc kubenswrapper[5023]: I0219 08:24:18.327251 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:18 crc kubenswrapper[5023]: I0219 08:24:18.346482 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:20 crc kubenswrapper[5023]: I0219 08:24:20.481637 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:20 crc kubenswrapper[5023]: I0219 08:24:20.482244 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-central-agent" containerID="cri-o://57f4441cb40f3e0750529e9e41089d4fb82588d7fec4d81c750e3069b67db826" gracePeriod=30 Feb 19 08:24:20 crc kubenswrapper[5023]: I0219 08:24:20.482650 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="sg-core" containerID="cri-o://12cd2d250ad880a551c6cc452dbf809b6e847f9d7185a68643b206be268e4254" gracePeriod=30 Feb 19 08:24:20 crc kubenswrapper[5023]: I0219 08:24:20.482665 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="proxy-httpd" containerID="cri-o://519d44fc76e53f106aa162bcf40e2a4f89f674458c450045e57c8e08da263fe1" gracePeriod=30 Feb 19 08:24:20 crc 
kubenswrapper[5023]: I0219 08:24:20.482704 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-notification-agent" containerID="cri-o://1f7ed8742b380968e3f3fbe635ba52c028613b72d54ce3552397945f05007bf5" gracePeriod=30 Feb 19 08:24:20 crc kubenswrapper[5023]: I0219 08:24:20.504915 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.162:3000/\": EOF" Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.334352 5023 generic.go:334] "Generic (PLEG): container finished" podID="4dbda052-3646-4e81-96b9-ce6549f16457" containerID="519d44fc76e53f106aa162bcf40e2a4f89f674458c450045e57c8e08da263fe1" exitCode=0 Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.334840 5023 generic.go:334] "Generic (PLEG): container finished" podID="4dbda052-3646-4e81-96b9-ce6549f16457" containerID="12cd2d250ad880a551c6cc452dbf809b6e847f9d7185a68643b206be268e4254" exitCode=2 Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.334853 5023 generic.go:334] "Generic (PLEG): container finished" podID="4dbda052-3646-4e81-96b9-ce6549f16457" containerID="57f4441cb40f3e0750529e9e41089d4fb82588d7fec4d81c750e3069b67db826" exitCode=0 Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.334405 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerDied","Data":"519d44fc76e53f106aa162bcf40e2a4f89f674458c450045e57c8e08da263fe1"} Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.334894 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerDied","Data":"12cd2d250ad880a551c6cc452dbf809b6e847f9d7185a68643b206be268e4254"} Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.334914 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerDied","Data":"57f4441cb40f3e0750529e9e41089d4fb82588d7fec4d81c750e3069b67db826"} Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.886214 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.887090 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-kuttl-api-log" containerID="cri-o://f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811" gracePeriod=30 Feb 19 08:24:21 crc kubenswrapper[5023]: I0219 08:24:21.887371 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-api" containerID="cri-o://d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8" gracePeriod=30 Feb 19 08:24:22 crc kubenswrapper[5023]: I0219 08:24:22.343494 5023 generic.go:334] "Generic (PLEG): container finished" podID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerID="f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811" exitCode=143 Feb 19 08:24:22 crc kubenswrapper[5023]: I0219 08:24:22.343541 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad","Type":"ContainerDied","Data":"f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811"} Feb 19 08:24:23 crc kubenswrapper[5023]: I0219 08:24:23.524418 5023 
prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.166:9322/\": read tcp 10.217.0.2:32802->10.217.0.166:9322: read: connection reset by peer" Feb 19 08:24:23 crc kubenswrapper[5023]: I0219 08:24:23.524511 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9322/\": read tcp 10.217.0.2:32818->10.217.0.166:9322: read: connection reset by peer" Feb 19 08:24:23 crc kubenswrapper[5023]: I0219 08:24:23.947702 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.031753 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-public-tls-certs\") pod \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.031872 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-config-data\") pod \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.031944 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-internal-tls-certs\") pod \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " Feb 19 08:24:24 crc 
kubenswrapper[5023]: I0219 08:24:24.032040 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4rbs\" (UniqueName: \"kubernetes.io/projected/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-kube-api-access-g4rbs\") pod \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.032071 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-combined-ca-bundle\") pod \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.032134 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-custom-prometheus-ca\") pod \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.032204 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-logs\") pod \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\" (UID: \"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.033033 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-logs" (OuterVolumeSpecName: "logs") pod "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" (UID: "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.037924 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-kube-api-access-g4rbs" (OuterVolumeSpecName: "kube-api-access-g4rbs") pod "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" (UID: "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad"). InnerVolumeSpecName "kube-api-access-g4rbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.065124 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" (UID: "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.074606 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" (UID: "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.098059 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" (UID: "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.100249 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-config-data" (OuterVolumeSpecName: "config-data") pod "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" (UID: "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.104308 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" (UID: "7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.135419 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.135487 5023 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.135507 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.135519 5023 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc 
kubenswrapper[5023]: I0219 08:24:24.135533 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4rbs\" (UniqueName: \"kubernetes.io/projected/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-kube-api-access-g4rbs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.135548 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.135559 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.367600 5023 generic.go:334] "Generic (PLEG): container finished" podID="4dbda052-3646-4e81-96b9-ce6549f16457" containerID="1f7ed8742b380968e3f3fbe635ba52c028613b72d54ce3552397945f05007bf5" exitCode=0 Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.367676 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerDied","Data":"1f7ed8742b380968e3f3fbe635ba52c028613b72d54ce3552397945f05007bf5"} Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.369825 5023 generic.go:334] "Generic (PLEG): container finished" podID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerID="d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8" exitCode=0 Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.369863 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.369887 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad","Type":"ContainerDied","Data":"d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8"} Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.369918 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad","Type":"ContainerDied","Data":"28d8391c04244988bddb232278c0e44f9b8e3f07ac625f2e5a2e4589961a74c1"} Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.369938 5023 scope.go:117] "RemoveContainer" containerID="d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.405574 5023 scope.go:117] "RemoveContainer" containerID="f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.410398 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.427166 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.461790 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:24 crc kubenswrapper[5023]: E0219 08:24:24.462421 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-api" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.462439 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-api" Feb 19 08:24:24 crc kubenswrapper[5023]: E0219 
08:24:24.462467 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-kuttl-api-log" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.462477 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-kuttl-api-log" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.462754 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-kuttl-api-log" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.462773 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" containerName="watcher-api" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.464226 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.464232 5023 scope.go:117] "RemoveContainer" containerID="d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8" Feb 19 08:24:24 crc kubenswrapper[5023]: E0219 08:24:24.469151 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8\": container with ID starting with d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8 not found: ID does not exist" containerID="d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.469216 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8"} err="failed to get container status \"d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8\": rpc error: code = NotFound desc = could not find container 
\"d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8\": container with ID starting with d37a64f43fbe7335b492fa5ec66899a67bed6490069fdb510f46a398c593ffd8 not found: ID does not exist" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.469246 5023 scope.go:117] "RemoveContainer" containerID="f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.469707 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.469728 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.470013 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.473868 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:24 crc kubenswrapper[5023]: E0219 08:24:24.474714 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811\": container with ID starting with f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811 not found: ID does not exist" containerID="f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.474753 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811"} err="failed to get container status \"f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811\": rpc error: code = NotFound desc = could not find container 
\"f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811\": container with ID starting with f4428984da4fb29f09ab41188534f1b5af94afac4a3132811dabf8bbafbdd811 not found: ID does not exist" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.553816 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.554109 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e60574-af41-4bda-9968-9eccb150f161-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.554199 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.554235 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.554261 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.554453 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.554512 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xcs\" (UniqueName: \"kubernetes.io/projected/c7e60574-af41-4bda-9968-9eccb150f161-kube-api-access-l8xcs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.657049 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.657118 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.657175 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.657231 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.657267 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xcs\" (UniqueName: \"kubernetes.io/projected/c7e60574-af41-4bda-9968-9eccb150f161-kube-api-access-l8xcs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.657331 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.657520 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e60574-af41-4bda-9968-9eccb150f161-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.658032 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e60574-af41-4bda-9968-9eccb150f161-logs\") pod \"watcher-kuttl-api-0\" 
(UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.666730 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.666754 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.666776 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.667200 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.667298 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 
crc kubenswrapper[5023]: I0219 08:24:24.677006 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xcs\" (UniqueName: \"kubernetes.io/projected/c7e60574-af41-4bda-9968-9eccb150f161-kube-api-access-l8xcs\") pod \"watcher-kuttl-api-0\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.774193 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.784733 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861092 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-log-httpd\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861169 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-ceilometer-tls-certs\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861224 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v88pc\" (UniqueName: \"kubernetes.io/projected/4dbda052-3646-4e81-96b9-ce6549f16457-kube-api-access-v88pc\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861245 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-run-httpd\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861292 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-scripts\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861340 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-config-data\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861384 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-combined-ca-bundle\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.861422 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-sg-core-conf-yaml\") pod \"4dbda052-3646-4e81-96b9-ce6549f16457\" (UID: \"4dbda052-3646-4e81-96b9-ce6549f16457\") " Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.862445 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.862664 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.866465 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbda052-3646-4e81-96b9-ce6549f16457-kube-api-access-v88pc" (OuterVolumeSpecName: "kube-api-access-v88pc") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "kube-api-access-v88pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.867849 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-scripts" (OuterVolumeSpecName: "scripts") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.893760 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.923559 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.950366 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.964604 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.964657 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.964671 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.964682 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-log-httpd\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.964691 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.964702 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v88pc\" (UniqueName: \"kubernetes.io/projected/4dbda052-3646-4e81-96b9-ce6549f16457-kube-api-access-v88pc\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.964714 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbda052-3646-4e81-96b9-ce6549f16457-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:24 crc kubenswrapper[5023]: I0219 08:24:24.967643 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-config-data" (OuterVolumeSpecName: "config-data") pod "4dbda052-3646-4e81-96b9-ce6549f16457" (UID: "4dbda052-3646-4e81-96b9-ce6549f16457"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.072127 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbda052-3646-4e81-96b9-ce6549f16457-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.249131 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.390460 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4dbda052-3646-4e81-96b9-ce6549f16457","Type":"ContainerDied","Data":"72a560261e98bcf1e5d8eaeeeb36d0df0b9745dfc6b4ff5b421886a5976c2e5f"} Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.390828 5023 scope.go:117] "RemoveContainer" containerID="519d44fc76e53f106aa162bcf40e2a4f89f674458c450045e57c8e08da263fe1" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.390496 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.401682 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7e60574-af41-4bda-9968-9eccb150f161","Type":"ContainerStarted","Data":"8f99d4eb799caee6826af89618ce8c480230e9be9424f7e9d2c9bbe31ae0ddee"} Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.418631 5023 scope.go:117] "RemoveContainer" containerID="12cd2d250ad880a551c6cc452dbf809b6e847f9d7185a68643b206be268e4254" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.459054 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.470730 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.485611 5023 scope.go:117] "RemoveContainer" containerID="1f7ed8742b380968e3f3fbe635ba52c028613b72d54ce3552397945f05007bf5" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.491936 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" path="/var/lib/kubelet/pods/4dbda052-3646-4e81-96b9-ce6549f16457/volumes" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.492837 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad" path="/var/lib/kubelet/pods/7d7aa345-dc5d-4301-9c8d-d8fd0beb7fad/volumes" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.501551 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:25 crc kubenswrapper[5023]: E0219 08:24:25.502544 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-notification-agent" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 
08:24:25.502574 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-notification-agent" Feb 19 08:24:25 crc kubenswrapper[5023]: E0219 08:24:25.502652 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="proxy-httpd" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.502667 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="proxy-httpd" Feb 19 08:24:25 crc kubenswrapper[5023]: E0219 08:24:25.502678 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-central-agent" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.502686 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-central-agent" Feb 19 08:24:25 crc kubenswrapper[5023]: E0219 08:24:25.502721 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="sg-core" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.502731 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="sg-core" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.503130 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="sg-core" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.503157 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-notification-agent" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.503203 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="proxy-httpd" Feb 19 08:24:25 crc kubenswrapper[5023]: 
I0219 08:24:25.503228 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbda052-3646-4e81-96b9-ce6549f16457" containerName="ceilometer-central-agent" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.518124 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.518884 5023 scope.go:117] "RemoveContainer" containerID="57f4441cb40f3e0750529e9e41089d4fb82588d7fec4d81c750e3069b67db826" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.522557 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.522820 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.523008 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.525859 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.710463 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dc72\" (UniqueName: \"kubernetes.io/projected/4e17256e-44e1-4a1a-becc-1df13cf2b66a-kube-api-access-6dc72\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.710643 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.710688 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-scripts\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.710730 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-config-data\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.710763 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.710782 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.710807 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-log-httpd\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: 
I0219 08:24:25.710841 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-run-httpd\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.811866 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.811926 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.811957 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-log-httpd\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.811973 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-run-httpd\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.812018 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dc72\" (UniqueName: 
\"kubernetes.io/projected/4e17256e-44e1-4a1a-becc-1df13cf2b66a-kube-api-access-6dc72\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.812064 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.812116 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-scripts\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.812166 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-config-data\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.813096 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-run-httpd\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.816370 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-log-httpd\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 
08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.832773 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.834201 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.836582 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-scripts\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.839339 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.847827 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-config-data\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.851227 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dc72\" (UniqueName: 
\"kubernetes.io/projected/4e17256e-44e1-4a1a-becc-1df13cf2b66a-kube-api-access-6dc72\") pod \"ceilometer-0\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:25 crc kubenswrapper[5023]: I0219 08:24:25.864053 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:26 crc kubenswrapper[5023]: I0219 08:24:26.371253 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:26 crc kubenswrapper[5023]: W0219 08:24:26.373949 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e17256e_44e1_4a1a_becc_1df13cf2b66a.slice/crio-9c531ab6c0945f545308bff6b1cb0c0205f5c38da61511d862955603dbfb9b41 WatchSource:0}: Error finding container 9c531ab6c0945f545308bff6b1cb0c0205f5c38da61511d862955603dbfb9b41: Status 404 returned error can't find the container with id 9c531ab6c0945f545308bff6b1cb0c0205f5c38da61511d862955603dbfb9b41 Feb 19 08:24:26 crc kubenswrapper[5023]: I0219 08:24:26.416013 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7e60574-af41-4bda-9968-9eccb150f161","Type":"ContainerStarted","Data":"29ddf3f10bb6d19a51b5d8c932a39e0ea6baa8e5f9efe87878e55a25858b7b41"} Feb 19 08:24:26 crc kubenswrapper[5023]: I0219 08:24:26.416470 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:26 crc kubenswrapper[5023]: I0219 08:24:26.416609 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7e60574-af41-4bda-9968-9eccb150f161","Type":"ContainerStarted","Data":"6915c682a9c7fde4b8c71c38b7f9f3594b105714dc4264c0ee0115667a67a4b6"} Feb 19 08:24:26 crc kubenswrapper[5023]: I0219 08:24:26.417463 5023 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerStarted","Data":"9c531ab6c0945f545308bff6b1cb0c0205f5c38da61511d862955603dbfb9b41"} Feb 19 08:24:26 crc kubenswrapper[5023]: I0219 08:24:26.441984 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.441958067 podStartE2EDuration="2.441958067s" podCreationTimestamp="2026-02-19 08:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:26.437068497 +0000 UTC m=+1424.094187435" watchObservedRunningTime="2026-02-19 08:24:26.441958067 +0000 UTC m=+1424.099077035" Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.428935 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerStarted","Data":"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c"} Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.602871 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm"] Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.609419 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tt9nm"] Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.675913 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.676184 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e37c6ebe-c291-42ce-b082-67c5e054010d" containerName="watcher-applier" containerID="cri-o://6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864" gracePeriod=30 Feb 19 08:24:27 
crc kubenswrapper[5023]: I0219 08:24:27.689956 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.690176 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" containerName="watcher-decision-engine" containerID="cri-o://b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede" gracePeriod=30 Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.726202 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher005d-account-delete-ct7lq"] Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.727343 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.759251 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher005d-account-delete-ct7lq"] Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.800908 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.851042 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d75eda-b51a-40fe-9239-745e16bf8614-operator-scripts\") pod \"watcher005d-account-delete-ct7lq\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.852529 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqpb\" (UniqueName: \"kubernetes.io/projected/f8d75eda-b51a-40fe-9239-745e16bf8614-kube-api-access-2vqpb\") pod 
\"watcher005d-account-delete-ct7lq\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:27 crc kubenswrapper[5023]: E0219 08:24:27.934115 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:24:27 crc kubenswrapper[5023]: E0219 08:24:27.938098 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:24:27 crc kubenswrapper[5023]: E0219 08:24:27.946116 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:24:27 crc kubenswrapper[5023]: E0219 08:24:27.946188 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e37c6ebe-c291-42ce-b082-67c5e054010d" containerName="watcher-applier" Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.955207 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vqpb\" (UniqueName: \"kubernetes.io/projected/f8d75eda-b51a-40fe-9239-745e16bf8614-kube-api-access-2vqpb\") pod 
\"watcher005d-account-delete-ct7lq\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.955688 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d75eda-b51a-40fe-9239-745e16bf8614-operator-scripts\") pod \"watcher005d-account-delete-ct7lq\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.957022 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d75eda-b51a-40fe-9239-745e16bf8614-operator-scripts\") pod \"watcher005d-account-delete-ct7lq\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:27 crc kubenswrapper[5023]: I0219 08:24:27.977831 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vqpb\" (UniqueName: \"kubernetes.io/projected/f8d75eda-b51a-40fe-9239-745e16bf8614-kube-api-access-2vqpb\") pod \"watcher005d-account-delete-ct7lq\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.055607 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.471515 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.472194 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-kuttl-api-log" containerID="cri-o://6915c682a9c7fde4b8c71c38b7f9f3594b105714dc4264c0ee0115667a67a4b6" gracePeriod=30 Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.472492 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerStarted","Data":"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10"} Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.472769 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-api" containerID="cri-o://29ddf3f10bb6d19a51b5d8c932a39e0ea6baa8e5f9efe87878e55a25858b7b41" gracePeriod=30 Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.482055 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.169:9322/\": EOF" Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.488431 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.169:9322/\": EOF" Feb 19 08:24:28 crc kubenswrapper[5023]: I0219 08:24:28.674924 5023 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["watcher-kuttl-default/watcher005d-account-delete-ct7lq"] Feb 19 08:24:29 crc kubenswrapper[5023]: I0219 08:24:29.488302 5023 generic.go:334] "Generic (PLEG): container finished" podID="c7e60574-af41-4bda-9968-9eccb150f161" containerID="6915c682a9c7fde4b8c71c38b7f9f3594b105714dc4264c0ee0115667a67a4b6" exitCode=143 Feb 19 08:24:29 crc kubenswrapper[5023]: I0219 08:24:29.489212 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a04451e-8bde-4e66-9ec5-d7308f0bdbe2" path="/var/lib/kubelet/pods/8a04451e-8bde-4e66-9ec5-d7308f0bdbe2/volumes" Feb 19 08:24:29 crc kubenswrapper[5023]: I0219 08:24:29.489873 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" event={"ID":"f8d75eda-b51a-40fe-9239-745e16bf8614","Type":"ContainerStarted","Data":"488c24ed5b3be5df8a02d168d439d1e71d7a53b26e50cf88ec6103ba918c8021"} Feb 19 08:24:29 crc kubenswrapper[5023]: I0219 08:24:29.489912 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" event={"ID":"f8d75eda-b51a-40fe-9239-745e16bf8614","Type":"ContainerStarted","Data":"af68effee10c85743deb663721be70bdbd80e1c13949534a8e13de1745fc80be"} Feb 19 08:24:29 crc kubenswrapper[5023]: I0219 08:24:29.489930 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7e60574-af41-4bda-9968-9eccb150f161","Type":"ContainerDied","Data":"6915c682a9c7fde4b8c71c38b7f9f3594b105714dc4264c0ee0115667a67a4b6"} Feb 19 08:24:29 crc kubenswrapper[5023]: I0219 08:24:29.785258 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.227911 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-api" 
probeResult="failure" output="Get \"https://10.217.0.169:9322/\": read tcp 10.217.0.2:57490->10.217.0.169:9322: read: connection reset by peer" Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.508053 5023 generic.go:334] "Generic (PLEG): container finished" podID="e37c6ebe-c291-42ce-b082-67c5e054010d" containerID="6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864" exitCode=0 Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.508312 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e37c6ebe-c291-42ce-b082-67c5e054010d","Type":"ContainerDied","Data":"6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864"} Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.518793 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerStarted","Data":"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a"} Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.520554 5023 generic.go:334] "Generic (PLEG): container finished" podID="f8d75eda-b51a-40fe-9239-745e16bf8614" containerID="488c24ed5b3be5df8a02d168d439d1e71d7a53b26e50cf88ec6103ba918c8021" exitCode=0 Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.520676 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" event={"ID":"f8d75eda-b51a-40fe-9239-745e16bf8614","Type":"ContainerDied","Data":"488c24ed5b3be5df8a02d168d439d1e71d7a53b26e50cf88ec6103ba918c8021"} Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.522557 5023 generic.go:334] "Generic (PLEG): container finished" podID="c7e60574-af41-4bda-9968-9eccb150f161" containerID="29ddf3f10bb6d19a51b5d8c932a39e0ea6baa8e5f9efe87878e55a25858b7b41" exitCode=0 Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.522606 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7e60574-af41-4bda-9968-9eccb150f161","Type":"ContainerDied","Data":"29ddf3f10bb6d19a51b5d8c932a39e0ea6baa8e5f9efe87878e55a25858b7b41"} Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.878663 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:30 crc kubenswrapper[5023]: I0219 08:24:30.943010 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016176 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-custom-prometheus-ca\") pod \"c7e60574-af41-4bda-9968-9eccb150f161\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016526 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e60574-af41-4bda-9968-9eccb150f161-logs\") pod \"c7e60574-af41-4bda-9968-9eccb150f161\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016570 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8br9\" (UniqueName: \"kubernetes.io/projected/e37c6ebe-c291-42ce-b082-67c5e054010d-kube-api-access-p8br9\") pod \"e37c6ebe-c291-42ce-b082-67c5e054010d\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016588 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-public-tls-certs\") pod \"c7e60574-af41-4bda-9968-9eccb150f161\" (UID: 
\"c7e60574-af41-4bda-9968-9eccb150f161\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016609 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8xcs\" (UniqueName: \"kubernetes.io/projected/c7e60574-af41-4bda-9968-9eccb150f161-kube-api-access-l8xcs\") pod \"c7e60574-af41-4bda-9968-9eccb150f161\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016699 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-config-data\") pod \"c7e60574-af41-4bda-9968-9eccb150f161\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016728 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-combined-ca-bundle\") pod \"c7e60574-af41-4bda-9968-9eccb150f161\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016751 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37c6ebe-c291-42ce-b082-67c5e054010d-logs\") pod \"e37c6ebe-c291-42ce-b082-67c5e054010d\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016781 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-internal-tls-certs\") pod \"c7e60574-af41-4bda-9968-9eccb150f161\" (UID: \"c7e60574-af41-4bda-9968-9eccb150f161\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016806 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-combined-ca-bundle\") pod \"e37c6ebe-c291-42ce-b082-67c5e054010d\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.016823 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-config-data\") pod \"e37c6ebe-c291-42ce-b082-67c5e054010d\" (UID: \"e37c6ebe-c291-42ce-b082-67c5e054010d\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.018802 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37c6ebe-c291-42ce-b082-67c5e054010d-logs" (OuterVolumeSpecName: "logs") pod "e37c6ebe-c291-42ce-b082-67c5e054010d" (UID: "e37c6ebe-c291-42ce-b082-67c5e054010d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.026566 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e60574-af41-4bda-9968-9eccb150f161-logs" (OuterVolumeSpecName: "logs") pod "c7e60574-af41-4bda-9968-9eccb150f161" (UID: "c7e60574-af41-4bda-9968-9eccb150f161"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.049127 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e60574-af41-4bda-9968-9eccb150f161-kube-api-access-l8xcs" (OuterVolumeSpecName: "kube-api-access-l8xcs") pod "c7e60574-af41-4bda-9968-9eccb150f161" (UID: "c7e60574-af41-4bda-9968-9eccb150f161"). InnerVolumeSpecName "kube-api-access-l8xcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.051758 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37c6ebe-c291-42ce-b082-67c5e054010d-kube-api-access-p8br9" (OuterVolumeSpecName: "kube-api-access-p8br9") pod "e37c6ebe-c291-42ce-b082-67c5e054010d" (UID: "e37c6ebe-c291-42ce-b082-67c5e054010d"). InnerVolumeSpecName "kube-api-access-p8br9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.074847 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e37c6ebe-c291-42ce-b082-67c5e054010d" (UID: "e37c6ebe-c291-42ce-b082-67c5e054010d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.113634 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e60574-af41-4bda-9968-9eccb150f161" (UID: "c7e60574-af41-4bda-9968-9eccb150f161"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.116841 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7e60574-af41-4bda-9968-9eccb150f161" (UID: "c7e60574-af41-4bda-9968-9eccb150f161"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.120871 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e60574-af41-4bda-9968-9eccb150f161-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.121071 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8br9\" (UniqueName: \"kubernetes.io/projected/e37c6ebe-c291-42ce-b082-67c5e054010d-kube-api-access-p8br9\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.121136 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8xcs\" (UniqueName: \"kubernetes.io/projected/c7e60574-af41-4bda-9968-9eccb150f161-kube-api-access-l8xcs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.121212 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.121305 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37c6ebe-c291-42ce-b082-67c5e054010d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.121372 5023 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.121431 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.140135 5023 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c7e60574-af41-4bda-9968-9eccb150f161" (UID: "c7e60574-af41-4bda-9968-9eccb150f161"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.155501 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-config-data" (OuterVolumeSpecName: "config-data") pod "c7e60574-af41-4bda-9968-9eccb150f161" (UID: "c7e60574-af41-4bda-9968-9eccb150f161"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.171158 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-config-data" (OuterVolumeSpecName: "config-data") pod "e37c6ebe-c291-42ce-b082-67c5e054010d" (UID: "e37c6ebe-c291-42ce-b082-67c5e054010d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.191499 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7e60574-af41-4bda-9968-9eccb150f161" (UID: "c7e60574-af41-4bda-9968-9eccb150f161"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.224032 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.224081 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37c6ebe-c291-42ce-b082-67c5e054010d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.224096 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.224112 5023 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7e60574-af41-4bda-9968-9eccb150f161-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.240221 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.327189 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-logs\") pod \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.327345 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-custom-prometheus-ca\") pod \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.327393 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6k8b\" (UniqueName: \"kubernetes.io/projected/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-kube-api-access-p6k8b\") pod \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.327501 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-config-data\") pod \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.327555 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-combined-ca-bundle\") pod \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\" (UID: \"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90\") " Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.328989 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-logs" (OuterVolumeSpecName: "logs") pod "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" (UID: "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.335881 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-kube-api-access-p6k8b" (OuterVolumeSpecName: "kube-api-access-p6k8b") pod "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" (UID: "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90"). InnerVolumeSpecName "kube-api-access-p6k8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.357968 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" (UID: "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.360002 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" (UID: "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.400813 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-config-data" (OuterVolumeSpecName: "config-data") pod "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" (UID: "ba01942f-a3f7-4f5e-8793-b2f5f24ebb90"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.429985 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.430031 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.430048 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.430057 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.430069 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6k8b\" (UniqueName: \"kubernetes.io/projected/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90-kube-api-access-p6k8b\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.539739 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.540465 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c7e60574-af41-4bda-9968-9eccb150f161","Type":"ContainerDied","Data":"8f99d4eb799caee6826af89618ce8c480230e9be9424f7e9d2c9bbe31ae0ddee"} Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.540647 5023 scope.go:117] "RemoveContainer" containerID="29ddf3f10bb6d19a51b5d8c932a39e0ea6baa8e5f9efe87878e55a25858b7b41" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.545945 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.545497 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e37c6ebe-c291-42ce-b082-67c5e054010d","Type":"ContainerDied","Data":"66632edc7f21a8861e64c2def06c43c2b48b13a1eee2069a6c7009ea814ed4e2"} Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.553043 5023 generic.go:334] "Generic (PLEG): container finished" podID="ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" containerID="b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede" exitCode=0 Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.553289 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90","Type":"ContainerDied","Data":"b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede"} Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.553396 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ba01942f-a3f7-4f5e-8793-b2f5f24ebb90","Type":"ContainerDied","Data":"f772e431b72635946b7e12e15bd5050121ae8f3e9742780e06cbb0e4fe87a069"} Feb 19 
08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.553961 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.583750 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerStarted","Data":"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241"} Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.584217 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.598890 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.611372 5023 scope.go:117] "RemoveContainer" containerID="6915c682a9c7fde4b8c71c38b7f9f3594b105714dc4264c0ee0115667a67a4b6" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.619419 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.636881 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.645512 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.656859 5023 scope.go:117] "RemoveContainer" containerID="6dda4083256bd1143e99c6d439910e5fd0e1cc5dc29fda4c5cbe01af353a3864" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.657051 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.673220 5023 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.679456 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.870534066 podStartE2EDuration="6.679437139s" podCreationTimestamp="2026-02-19 08:24:25 +0000 UTC" firstStartedPulling="2026-02-19 08:24:26.378436722 +0000 UTC m=+1424.035555670" lastFinishedPulling="2026-02-19 08:24:31.187339795 +0000 UTC m=+1428.844458743" observedRunningTime="2026-02-19 08:24:31.665020167 +0000 UTC m=+1429.322139115" watchObservedRunningTime="2026-02-19 08:24:31.679437139 +0000 UTC m=+1429.336556087" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.707248 5023 scope.go:117] "RemoveContainer" containerID="b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.731605 5023 scope.go:117] "RemoveContainer" containerID="b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede" Feb 19 08:24:31 crc kubenswrapper[5023]: E0219 08:24:31.732352 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede\": container with ID starting with b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede not found: ID does not exist" containerID="b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede" Feb 19 08:24:31 crc kubenswrapper[5023]: I0219 08:24:31.732404 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede"} err="failed to get container status \"b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede\": rpc error: code = NotFound desc = could not find container \"b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede\": container 
with ID starting with b1b5589d3853cf1a68dc9f8c0831b4c9b45cfb12aef07feb9f2a2cdf5058dede not found: ID does not exist" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.094165 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.147585 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vqpb\" (UniqueName: \"kubernetes.io/projected/f8d75eda-b51a-40fe-9239-745e16bf8614-kube-api-access-2vqpb\") pod \"f8d75eda-b51a-40fe-9239-745e16bf8614\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.147716 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d75eda-b51a-40fe-9239-745e16bf8614-operator-scripts\") pod \"f8d75eda-b51a-40fe-9239-745e16bf8614\" (UID: \"f8d75eda-b51a-40fe-9239-745e16bf8614\") " Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.148795 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d75eda-b51a-40fe-9239-745e16bf8614-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8d75eda-b51a-40fe-9239-745e16bf8614" (UID: "f8d75eda-b51a-40fe-9239-745e16bf8614"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.163816 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d75eda-b51a-40fe-9239-745e16bf8614-kube-api-access-2vqpb" (OuterVolumeSpecName: "kube-api-access-2vqpb") pod "f8d75eda-b51a-40fe-9239-745e16bf8614" (UID: "f8d75eda-b51a-40fe-9239-745e16bf8614"). InnerVolumeSpecName "kube-api-access-2vqpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.249827 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d75eda-b51a-40fe-9239-745e16bf8614-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.249866 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vqpb\" (UniqueName: \"kubernetes.io/projected/f8d75eda-b51a-40fe-9239-745e16bf8614-kube-api-access-2vqpb\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.338661 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.594140 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" event={"ID":"f8d75eda-b51a-40fe-9239-745e16bf8614","Type":"ContainerDied","Data":"af68effee10c85743deb663721be70bdbd80e1c13949534a8e13de1745fc80be"} Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.594184 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af68effee10c85743deb663721be70bdbd80e1c13949534a8e13de1745fc80be" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.594150 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher005d-account-delete-ct7lq" Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.766515 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z9wcj"] Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.775907 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z9wcj"] Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.783594 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher005d-account-delete-ct7lq"] Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.789749 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-005d-account-create-update-646cq"] Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.797599 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-005d-account-create-update-646cq"] Feb 19 08:24:32 crc kubenswrapper[5023]: I0219 08:24:32.811231 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher005d-account-delete-ct7lq"] Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.488520 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a41a04-662b-45da-98b7-32512a1396d3" path="/var/lib/kubelet/pods/10a41a04-662b-45da-98b7-32512a1396d3/volumes" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.489213 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" path="/var/lib/kubelet/pods/ba01942f-a3f7-4f5e-8793-b2f5f24ebb90/volumes" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.489884 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e60574-af41-4bda-9968-9eccb150f161" path="/var/lib/kubelet/pods/c7e60574-af41-4bda-9968-9eccb150f161/volumes" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.491168 5023 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37c6ebe-c291-42ce-b082-67c5e054010d" path="/var/lib/kubelet/pods/e37c6ebe-c291-42ce-b082-67c5e054010d/volumes" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.491772 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7ace87-2a96-4d9d-bffd-ae72e694b353" path="/var/lib/kubelet/pods/ed7ace87-2a96-4d9d-bffd-ae72e694b353/volumes" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.492478 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d75eda-b51a-40fe-9239-745e16bf8614" path="/var/lib/kubelet/pods/f8d75eda-b51a-40fe-9239-745e16bf8614/volumes" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.604959 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-central-agent" containerID="cri-o://6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" gracePeriod=30 Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.604984 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="sg-core" containerID="cri-o://daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" gracePeriod=30 Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.605043 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-notification-agent" containerID="cri-o://d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" gracePeriod=30 Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.605059 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" 
containerName="proxy-httpd" containerID="cri-o://51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" gracePeriod=30 Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.898992 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7"] Feb 19 08:24:33 crc kubenswrapper[5023]: E0219 08:24:33.902823 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-api" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.902852 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-api" Feb 19 08:24:33 crc kubenswrapper[5023]: E0219 08:24:33.902868 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37c6ebe-c291-42ce-b082-67c5e054010d" containerName="watcher-applier" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.902875 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37c6ebe-c291-42ce-b082-67c5e054010d" containerName="watcher-applier" Feb 19 08:24:33 crc kubenswrapper[5023]: E0219 08:24:33.902888 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-kuttl-api-log" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.902897 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-kuttl-api-log" Feb 19 08:24:33 crc kubenswrapper[5023]: E0219 08:24:33.902912 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" containerName="watcher-decision-engine" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.902918 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" containerName="watcher-decision-engine" Feb 19 08:24:33 crc kubenswrapper[5023]: E0219 08:24:33.902942 5023 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d75eda-b51a-40fe-9239-745e16bf8614" containerName="mariadb-account-delete" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.902948 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d75eda-b51a-40fe-9239-745e16bf8614" containerName="mariadb-account-delete" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.903240 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-api" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.903259 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37c6ebe-c291-42ce-b082-67c5e054010d" containerName="watcher-applier" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.903282 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e60574-af41-4bda-9968-9eccb150f161" containerName="watcher-kuttl-api-log" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.903296 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba01942f-a3f7-4f5e-8793-b2f5f24ebb90" containerName="watcher-decision-engine" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.903316 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d75eda-b51a-40fe-9239-745e16bf8614" containerName="mariadb-account-delete" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.913319 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.919708 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-tlxrw"] Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.921027 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.922540 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.935692 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tlxrw"] Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.957695 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7"] Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.983978 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da273fe4-2d94-45fa-a45d-3f3e77cb8082-operator-scripts\") pod \"watcher-b45c-account-create-update-6vdb7\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.984074 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5555e0c-d705-4ff7-842f-96152050d5d5-operator-scripts\") pod \"watcher-db-create-tlxrw\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.984114 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8h2k\" (UniqueName: \"kubernetes.io/projected/da273fe4-2d94-45fa-a45d-3f3e77cb8082-kube-api-access-h8h2k\") pod \"watcher-b45c-account-create-update-6vdb7\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:33 crc kubenswrapper[5023]: I0219 08:24:33.984147 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jflq\" (UniqueName: \"kubernetes.io/projected/e5555e0c-d705-4ff7-842f-96152050d5d5-kube-api-access-2jflq\") pod \"watcher-db-create-tlxrw\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.086096 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da273fe4-2d94-45fa-a45d-3f3e77cb8082-operator-scripts\") pod \"watcher-b45c-account-create-update-6vdb7\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.086202 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5555e0c-d705-4ff7-842f-96152050d5d5-operator-scripts\") pod \"watcher-db-create-tlxrw\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.086276 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8h2k\" (UniqueName: \"kubernetes.io/projected/da273fe4-2d94-45fa-a45d-3f3e77cb8082-kube-api-access-h8h2k\") pod \"watcher-b45c-account-create-update-6vdb7\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.086310 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jflq\" (UniqueName: \"kubernetes.io/projected/e5555e0c-d705-4ff7-842f-96152050d5d5-kube-api-access-2jflq\") pod \"watcher-db-create-tlxrw\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " 
pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.087166 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da273fe4-2d94-45fa-a45d-3f3e77cb8082-operator-scripts\") pod \"watcher-b45c-account-create-update-6vdb7\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.087355 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5555e0c-d705-4ff7-842f-96152050d5d5-operator-scripts\") pod \"watcher-db-create-tlxrw\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.107132 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8h2k\" (UniqueName: \"kubernetes.io/projected/da273fe4-2d94-45fa-a45d-3f3e77cb8082-kube-api-access-h8h2k\") pod \"watcher-b45c-account-create-update-6vdb7\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.127827 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jflq\" (UniqueName: \"kubernetes.io/projected/e5555e0c-d705-4ff7-842f-96152050d5d5-kube-api-access-2jflq\") pod \"watcher-db-create-tlxrw\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.265308 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.279740 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.569031 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.617908 5023 generic.go:334] "Generic (PLEG): container finished" podID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerID="51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" exitCode=0 Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.617941 5023 generic.go:334] "Generic (PLEG): container finished" podID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerID="daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" exitCode=2 Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.617974 5023 generic.go:334] "Generic (PLEG): container finished" podID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerID="d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" exitCode=0 Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.617983 5023 generic.go:334] "Generic (PLEG): container finished" podID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerID="6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" exitCode=0 Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.618004 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerDied","Data":"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241"} Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.618030 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerDied","Data":"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a"} Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.618042 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerDied","Data":"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10"} Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.618051 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerDied","Data":"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c"} Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.618060 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4e17256e-44e1-4a1a-becc-1df13cf2b66a","Type":"ContainerDied","Data":"9c531ab6c0945f545308bff6b1cb0c0205f5c38da61511d862955603dbfb9b41"} Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.618076 5023 scope.go:117] "RemoveContainer" containerID="51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.618215 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.679145 5023 scope.go:117] "RemoveContainer" containerID="daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.698313 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-config-data\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.698389 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dc72\" (UniqueName: \"kubernetes.io/projected/4e17256e-44e1-4a1a-becc-1df13cf2b66a-kube-api-access-6dc72\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.698430 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-run-httpd\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.698502 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-scripts\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.698528 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-log-httpd\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 
crc kubenswrapper[5023]: I0219 08:24:34.698548 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-sg-core-conf-yaml\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.698564 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-combined-ca-bundle\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.698655 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-ceilometer-tls-certs\") pod \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\" (UID: \"4e17256e-44e1-4a1a-becc-1df13cf2b66a\") " Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.701907 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.702141 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.710126 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-scripts" (OuterVolumeSpecName: "scripts") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.710484 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e17256e-44e1-4a1a-becc-1df13cf2b66a-kube-api-access-6dc72" (OuterVolumeSpecName: "kube-api-access-6dc72") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "kube-api-access-6dc72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.728292 5023 scope.go:117] "RemoveContainer" containerID="d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.745338 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.809108 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dc72\" (UniqueName: \"kubernetes.io/projected/4e17256e-44e1-4a1a-becc-1df13cf2b66a-kube-api-access-6dc72\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.809517 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.809531 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.809545 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e17256e-44e1-4a1a-becc-1df13cf2b66a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.809554 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.814546 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.816917 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7"] Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.827891 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.829202 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tlxrw"] Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.885290 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-config-data" (OuterVolumeSpecName: "config-data") pod "4e17256e-44e1-4a1a-becc-1df13cf2b66a" (UID: "4e17256e-44e1-4a1a-becc-1df13cf2b66a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.923117 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.923154 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.923165 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e17256e-44e1-4a1a-becc-1df13cf2b66a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.938787 5023 scope.go:117] "RemoveContainer" containerID="6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.967792 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.979005 5023 scope.go:117] "RemoveContainer" containerID="51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" Feb 19 08:24:34 crc kubenswrapper[5023]: E0219 08:24:34.981177 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": container with ID starting with 51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241 not found: ID does not exist" containerID="51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.981356 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241"} err="failed to get container status \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": rpc error: code = NotFound desc = could not find container \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": container with ID starting with 51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241 not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.981477 5023 scope.go:117] "RemoveContainer" containerID="daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.982566 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:34 crc kubenswrapper[5023]: E0219 08:24:34.984543 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": container with ID starting with daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a not found: ID does not exist" containerID="daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.984719 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a"} err="failed to get container status \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": rpc error: code = NotFound desc = could not find container \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": container with ID starting with daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.984849 5023 scope.go:117] "RemoveContainer" 
containerID="d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" Feb 19 08:24:34 crc kubenswrapper[5023]: E0219 08:24:34.985279 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": container with ID starting with d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10 not found: ID does not exist" containerID="d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.985401 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10"} err="failed to get container status \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": rpc error: code = NotFound desc = could not find container \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": container with ID starting with d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10 not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.985507 5023 scope.go:117] "RemoveContainer" containerID="6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" Feb 19 08:24:34 crc kubenswrapper[5023]: E0219 08:24:34.985871 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": container with ID starting with 6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c not found: ID does not exist" containerID="6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.986040 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c"} err="failed to get container status \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": rpc error: code = NotFound desc = could not find container \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": container with ID starting with 6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.986152 5023 scope.go:117] "RemoveContainer" containerID="51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.995489 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241"} err="failed to get container status \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": rpc error: code = NotFound desc = could not find container \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": container with ID starting with 51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241 not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.995564 5023 scope.go:117] "RemoveContainer" containerID="daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.996060 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a"} err="failed to get container status \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": rpc error: code = NotFound desc = could not find container \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": container with ID starting with daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a not found: ID does not 
exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.996081 5023 scope.go:117] "RemoveContainer" containerID="d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.997168 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10"} err="failed to get container status \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": rpc error: code = NotFound desc = could not find container \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": container with ID starting with d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10 not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.997191 5023 scope.go:117] "RemoveContainer" containerID="6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.997607 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c"} err="failed to get container status \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": rpc error: code = NotFound desc = could not find container \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": container with ID starting with 6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.997684 5023 scope.go:117] "RemoveContainer" containerID="51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.998060 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241"} err="failed to get container status 
\"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": rpc error: code = NotFound desc = could not find container \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": container with ID starting with 51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241 not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.998080 5023 scope.go:117] "RemoveContainer" containerID="daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.998512 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a"} err="failed to get container status \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": rpc error: code = NotFound desc = could not find container \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": container with ID starting with daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.998539 5023 scope.go:117] "RemoveContainer" containerID="d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.998842 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10"} err="failed to get container status \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": rpc error: code = NotFound desc = could not find container \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": container with ID starting with d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10 not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.998863 5023 scope.go:117] "RemoveContainer" 
containerID="6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.999211 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c"} err="failed to get container status \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": rpc error: code = NotFound desc = could not find container \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": container with ID starting with 6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.999282 5023 scope.go:117] "RemoveContainer" containerID="51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.999667 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241"} err="failed to get container status \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": rpc error: code = NotFound desc = could not find container \"51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241\": container with ID starting with 51a2f8359de212f3219fc797805c7a9296561059d6f4e28725e339ea9fdc6241 not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.999691 5023 scope.go:117] "RemoveContainer" containerID="daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.999958 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a"} err="failed to get container status \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": rpc error: code = NotFound desc = could 
not find container \"daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a\": container with ID starting with daeadf3628645efa3c4ad7c52bb3cabbd0033c17be6cacd50447eebd11ea2c2a not found: ID does not exist" Feb 19 08:24:34 crc kubenswrapper[5023]: I0219 08:24:34.999985 5023 scope.go:117] "RemoveContainer" containerID="d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.000169 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10"} err="failed to get container status \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": rpc error: code = NotFound desc = could not find container \"d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10\": container with ID starting with d9815b9b89a6cdc3f2508713ba708838d6431b808c7d70b78fdaf9efe2f34f10 not found: ID does not exist" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.000199 5023 scope.go:117] "RemoveContainer" containerID="6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.000357 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c"} err="failed to get container status \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": rpc error: code = NotFound desc = could not find container \"6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c\": container with ID starting with 6e603c87b58607f892a80c0eec13f1d677ffa1b084a6c81f383b3fe126b4216c not found: ID does not exist" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.009983 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:35 crc kubenswrapper[5023]: E0219 08:24:35.010559 5023 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="sg-core" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.010582 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="sg-core" Feb 19 08:24:35 crc kubenswrapper[5023]: E0219 08:24:35.010600 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-notification-agent" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.010608 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-notification-agent" Feb 19 08:24:35 crc kubenswrapper[5023]: E0219 08:24:35.010641 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="proxy-httpd" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.010648 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="proxy-httpd" Feb 19 08:24:35 crc kubenswrapper[5023]: E0219 08:24:35.010658 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-central-agent" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.010664 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-central-agent" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.010842 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-central-agent" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.010859 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="ceilometer-notification-agent" Feb 19 08:24:35 crc 
kubenswrapper[5023]: I0219 08:24:35.010876 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="proxy-httpd" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.010888 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" containerName="sg-core" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.012655 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.016339 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.016647 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.016688 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.023321 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.125822 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-config-data\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.125888 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv28h\" (UniqueName: \"kubernetes.io/projected/a89eac24-2c7e-4194-8c7f-9063887084ba-kube-api-access-cv28h\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.125920 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.125966 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-scripts\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.126090 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.126183 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-log-httpd\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.126220 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-run-httpd\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: 
I0219 08:24:35.126272 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.227700 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-scripts\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.228004 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.228092 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-log-httpd\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.228238 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-run-httpd\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.228331 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.228463 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-config-data\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.228538 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv28h\" (UniqueName: \"kubernetes.io/projected/a89eac24-2c7e-4194-8c7f-9063887084ba-kube-api-access-cv28h\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.228611 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.229016 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-log-httpd\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.229030 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-run-httpd\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.233251 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.233344 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-config-data\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.234425 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.235343 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.239565 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-scripts\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.249379 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv28h\" 
(UniqueName: \"kubernetes.io/projected/a89eac24-2c7e-4194-8c7f-9063887084ba-kube-api-access-cv28h\") pod \"ceilometer-0\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.366534 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.502356 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e17256e-44e1-4a1a-becc-1df13cf2b66a" path="/var/lib/kubelet/pods/4e17256e-44e1-4a1a-becc-1df13cf2b66a/volumes" Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.632842 5023 generic.go:334] "Generic (PLEG): container finished" podID="da273fe4-2d94-45fa-a45d-3f3e77cb8082" containerID="bd7b11302154249b241025202eb6e84dc3959a0426143162c18b884992596735" exitCode=0 Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.632958 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" event={"ID":"da273fe4-2d94-45fa-a45d-3f3e77cb8082","Type":"ContainerDied","Data":"bd7b11302154249b241025202eb6e84dc3959a0426143162c18b884992596735"} Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.633003 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" event={"ID":"da273fe4-2d94-45fa-a45d-3f3e77cb8082","Type":"ContainerStarted","Data":"804f0c37aa7073becae3db1087e461545dd3210109e9b2f9fdef678f45b7aec7"} Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.634989 5023 generic.go:334] "Generic (PLEG): container finished" podID="e5555e0c-d705-4ff7-842f-96152050d5d5" containerID="ce8862942dbe5381269645a4ca9e70fc1bee2d4282900dc4bb71343766fd619b" exitCode=0 Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.635051 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tlxrw" 
event={"ID":"e5555e0c-d705-4ff7-842f-96152050d5d5","Type":"ContainerDied","Data":"ce8862942dbe5381269645a4ca9e70fc1bee2d4282900dc4bb71343766fd619b"} Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.635086 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tlxrw" event={"ID":"e5555e0c-d705-4ff7-842f-96152050d5d5","Type":"ContainerStarted","Data":"aeaa529722aadda0b9e1cc3f450bd4d260043010f0f49b0162f8d3fddbc476ff"} Feb 19 08:24:35 crc kubenswrapper[5023]: I0219 08:24:35.839495 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:35 crc kubenswrapper[5023]: W0219 08:24:35.841970 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda89eac24_2c7e_4194_8c7f_9063887084ba.slice/crio-7ad2105a016d52fb22ebfbd14f239683615f02650a598de2aadf2881005058d4 WatchSource:0}: Error finding container 7ad2105a016d52fb22ebfbd14f239683615f02650a598de2aadf2881005058d4: Status 404 returned error can't find the container with id 7ad2105a016d52fb22ebfbd14f239683615f02650a598de2aadf2881005058d4 Feb 19 08:24:36 crc kubenswrapper[5023]: I0219 08:24:36.643479 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerStarted","Data":"7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1"} Feb 19 08:24:36 crc kubenswrapper[5023]: I0219 08:24:36.643844 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerStarted","Data":"7ad2105a016d52fb22ebfbd14f239683615f02650a598de2aadf2881005058d4"} Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.066883 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.072266 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.169270 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da273fe4-2d94-45fa-a45d-3f3e77cb8082-operator-scripts\") pod \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.169367 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5555e0c-d705-4ff7-842f-96152050d5d5-operator-scripts\") pod \"e5555e0c-d705-4ff7-842f-96152050d5d5\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.169514 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8h2k\" (UniqueName: \"kubernetes.io/projected/da273fe4-2d94-45fa-a45d-3f3e77cb8082-kube-api-access-h8h2k\") pod \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\" (UID: \"da273fe4-2d94-45fa-a45d-3f3e77cb8082\") " Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.169592 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jflq\" (UniqueName: \"kubernetes.io/projected/e5555e0c-d705-4ff7-842f-96152050d5d5-kube-api-access-2jflq\") pod \"e5555e0c-d705-4ff7-842f-96152050d5d5\" (UID: \"e5555e0c-d705-4ff7-842f-96152050d5d5\") " Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.170164 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da273fe4-2d94-45fa-a45d-3f3e77cb8082-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "da273fe4-2d94-45fa-a45d-3f3e77cb8082" (UID: "da273fe4-2d94-45fa-a45d-3f3e77cb8082"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.170240 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5555e0c-d705-4ff7-842f-96152050d5d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5555e0c-d705-4ff7-842f-96152050d5d5" (UID: "e5555e0c-d705-4ff7-842f-96152050d5d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.170651 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da273fe4-2d94-45fa-a45d-3f3e77cb8082-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.170676 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5555e0c-d705-4ff7-842f-96152050d5d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.176510 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5555e0c-d705-4ff7-842f-96152050d5d5-kube-api-access-2jflq" (OuterVolumeSpecName: "kube-api-access-2jflq") pod "e5555e0c-d705-4ff7-842f-96152050d5d5" (UID: "e5555e0c-d705-4ff7-842f-96152050d5d5"). InnerVolumeSpecName "kube-api-access-2jflq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.187855 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da273fe4-2d94-45fa-a45d-3f3e77cb8082-kube-api-access-h8h2k" (OuterVolumeSpecName: "kube-api-access-h8h2k") pod "da273fe4-2d94-45fa-a45d-3f3e77cb8082" (UID: "da273fe4-2d94-45fa-a45d-3f3e77cb8082"). InnerVolumeSpecName "kube-api-access-h8h2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.272425 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8h2k\" (UniqueName: \"kubernetes.io/projected/da273fe4-2d94-45fa-a45d-3f3e77cb8082-kube-api-access-h8h2k\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.272465 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jflq\" (UniqueName: \"kubernetes.io/projected/e5555e0c-d705-4ff7-842f-96152050d5d5-kube-api-access-2jflq\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.652819 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tlxrw" event={"ID":"e5555e0c-d705-4ff7-842f-96152050d5d5","Type":"ContainerDied","Data":"aeaa529722aadda0b9e1cc3f450bd4d260043010f0f49b0162f8d3fddbc476ff"} Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.653324 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeaa529722aadda0b9e1cc3f450bd4d260043010f0f49b0162f8d3fddbc476ff" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.653382 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tlxrw" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.655431 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" event={"ID":"da273fe4-2d94-45fa-a45d-3f3e77cb8082","Type":"ContainerDied","Data":"804f0c37aa7073becae3db1087e461545dd3210109e9b2f9fdef678f45b7aec7"} Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.655455 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="804f0c37aa7073becae3db1087e461545dd3210109e9b2f9fdef678f45b7aec7" Feb 19 08:24:37 crc kubenswrapper[5023]: I0219 08:24:37.655496 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7" Feb 19 08:24:38 crc kubenswrapper[5023]: I0219 08:24:38.665350 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerStarted","Data":"9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32"} Feb 19 08:24:38 crc kubenswrapper[5023]: I0219 08:24:38.665674 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerStarted","Data":"03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312"} Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.301501 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr"] Feb 19 08:24:39 crc kubenswrapper[5023]: E0219 08:24:39.302121 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da273fe4-2d94-45fa-a45d-3f3e77cb8082" containerName="mariadb-account-create-update" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.302141 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da273fe4-2d94-45fa-a45d-3f3e77cb8082" containerName="mariadb-account-create-update" Feb 19 08:24:39 crc kubenswrapper[5023]: E0219 08:24:39.302165 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5555e0c-d705-4ff7-842f-96152050d5d5" containerName="mariadb-database-create" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.302172 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5555e0c-d705-4ff7-842f-96152050d5d5" containerName="mariadb-database-create" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.302332 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="da273fe4-2d94-45fa-a45d-3f3e77cb8082" containerName="mariadb-account-create-update" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.302355 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5555e0c-d705-4ff7-842f-96152050d5d5" containerName="mariadb-database-create" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.302926 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.312828 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr"] Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.313357 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-mgnch" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.313662 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.415919 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-config-data\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.416000 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.416041 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.416077 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25gx\" (UniqueName: \"kubernetes.io/projected/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-kube-api-access-h25gx\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.517442 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-config-data\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.517526 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.517573 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.517609 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25gx\" (UniqueName: \"kubernetes.io/projected/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-kube-api-access-h25gx\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 
08:24:39.522821 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.531053 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.532179 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-config-data\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.534995 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25gx\" (UniqueName: \"kubernetes.io/projected/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-kube-api-access-h25gx\") pod \"watcher-kuttl-db-sync-pqbrr\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:39 crc kubenswrapper[5023]: I0219 08:24:39.623560 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:40 crc kubenswrapper[5023]: I0219 08:24:40.240122 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr"] Feb 19 08:24:40 crc kubenswrapper[5023]: W0219 08:24:40.242791 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1636bd3e_1de3_4efb_addd_1bd9c65ad48b.slice/crio-8f412b53a51d98d7edfff7d03a041eae09ba6e5370cb204b07ef770c529cfccb WatchSource:0}: Error finding container 8f412b53a51d98d7edfff7d03a041eae09ba6e5370cb204b07ef770c529cfccb: Status 404 returned error can't find the container with id 8f412b53a51d98d7edfff7d03a041eae09ba6e5370cb204b07ef770c529cfccb Feb 19 08:24:40 crc kubenswrapper[5023]: I0219 08:24:40.684564 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" event={"ID":"1636bd3e-1de3-4efb-addd-1bd9c65ad48b","Type":"ContainerStarted","Data":"ce9e08f4a7334dda4fb865562f1a3e9634566ad5e16bf6d7e9318ecd875f7527"} Feb 19 08:24:40 crc kubenswrapper[5023]: I0219 08:24:40.684894 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" event={"ID":"1636bd3e-1de3-4efb-addd-1bd9c65ad48b","Type":"ContainerStarted","Data":"8f412b53a51d98d7edfff7d03a041eae09ba6e5370cb204b07ef770c529cfccb"} Feb 19 08:24:40 crc kubenswrapper[5023]: I0219 08:24:40.686941 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerStarted","Data":"c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d"} Feb 19 08:24:40 crc kubenswrapper[5023]: I0219 08:24:40.687470 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:24:40 crc kubenswrapper[5023]: I0219 08:24:40.706112 
5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" podStartSLOduration=1.706094456 podStartE2EDuration="1.706094456s" podCreationTimestamp="2026-02-19 08:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:40.703784844 +0000 UTC m=+1438.360903792" watchObservedRunningTime="2026-02-19 08:24:40.706094456 +0000 UTC m=+1438.363213404" Feb 19 08:24:40 crc kubenswrapper[5023]: I0219 08:24:40.721068 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.8789175350000002 podStartE2EDuration="6.721049922s" podCreationTimestamp="2026-02-19 08:24:34 +0000 UTC" firstStartedPulling="2026-02-19 08:24:35.844588545 +0000 UTC m=+1433.501707493" lastFinishedPulling="2026-02-19 08:24:39.686720932 +0000 UTC m=+1437.343839880" observedRunningTime="2026-02-19 08:24:40.719356517 +0000 UTC m=+1438.376475465" watchObservedRunningTime="2026-02-19 08:24:40.721049922 +0000 UTC m=+1438.378168860" Feb 19 08:24:41 crc kubenswrapper[5023]: I0219 08:24:41.871111 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:24:41 crc kubenswrapper[5023]: I0219 08:24:41.871193 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:24:43 crc kubenswrapper[5023]: I0219 08:24:43.722364 5023 generic.go:334] "Generic (PLEG): 
container finished" podID="1636bd3e-1de3-4efb-addd-1bd9c65ad48b" containerID="ce9e08f4a7334dda4fb865562f1a3e9634566ad5e16bf6d7e9318ecd875f7527" exitCode=0 Feb 19 08:24:43 crc kubenswrapper[5023]: I0219 08:24:43.722452 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" event={"ID":"1636bd3e-1de3-4efb-addd-1bd9c65ad48b","Type":"ContainerDied","Data":"ce9e08f4a7334dda4fb865562f1a3e9634566ad5e16bf6d7e9318ecd875f7527"} Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.414820 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.525447 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-config-data\") pod \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.525905 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25gx\" (UniqueName: \"kubernetes.io/projected/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-kube-api-access-h25gx\") pod \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.525989 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-combined-ca-bundle\") pod \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.526054 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-db-sync-config-data\") pod \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\" (UID: \"1636bd3e-1de3-4efb-addd-1bd9c65ad48b\") " Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.550603 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-kube-api-access-h25gx" (OuterVolumeSpecName: "kube-api-access-h25gx") pod "1636bd3e-1de3-4efb-addd-1bd9c65ad48b" (UID: "1636bd3e-1de3-4efb-addd-1bd9c65ad48b"). InnerVolumeSpecName "kube-api-access-h25gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.550753 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1636bd3e-1de3-4efb-addd-1bd9c65ad48b" (UID: "1636bd3e-1de3-4efb-addd-1bd9c65ad48b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.559902 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1636bd3e-1de3-4efb-addd-1bd9c65ad48b" (UID: "1636bd3e-1de3-4efb-addd-1bd9c65ad48b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.589139 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-config-data" (OuterVolumeSpecName: "config-data") pod "1636bd3e-1de3-4efb-addd-1bd9c65ad48b" (UID: "1636bd3e-1de3-4efb-addd-1bd9c65ad48b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.629029 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.629084 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.629098 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h25gx\" (UniqueName: \"kubernetes.io/projected/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-kube-api-access-h25gx\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.629126 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1636bd3e-1de3-4efb-addd-1bd9c65ad48b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.740817 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" event={"ID":"1636bd3e-1de3-4efb-addd-1bd9c65ad48b","Type":"ContainerDied","Data":"8f412b53a51d98d7edfff7d03a041eae09ba6e5370cb204b07ef770c529cfccb"} Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.740860 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f412b53a51d98d7edfff7d03a041eae09ba6e5370cb204b07ef770c529cfccb" Feb 19 08:24:45 crc kubenswrapper[5023]: I0219 08:24:45.740932 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.014111 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:46 crc kubenswrapper[5023]: E0219 08:24:46.014525 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1636bd3e-1de3-4efb-addd-1bd9c65ad48b" containerName="watcher-kuttl-db-sync" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.014542 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1636bd3e-1de3-4efb-addd-1bd9c65ad48b" containerName="watcher-kuttl-db-sync" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.014776 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="1636bd3e-1de3-4efb-addd-1bd9c65ad48b" containerName="watcher-kuttl-db-sync" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.016018 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.026296 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.027291 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.027390 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-mgnch" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.034567 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.034843 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.035328 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.042956 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.047271 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.053609 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136023 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkjk\" (UniqueName: \"kubernetes.io/projected/8cf47556-1366-4f5d-ba66-f336b02faa48-kube-api-access-5gkjk\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136087 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136129 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136153 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf47556-1366-4f5d-ba66-f336b02faa48-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136186 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136204 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136226 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136244 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136377 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136449 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58acaab1-f2eb-4504-90db-42c824ac37f6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.136474 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjmfc\" (UniqueName: \"kubernetes.io/projected/58acaab1-f2eb-4504-90db-42c824ac37f6-kube-api-access-fjmfc\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.159994 5023 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.160975 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.162923 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.171381 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237586 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237647 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237670 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237698 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e314174-5790-4126-8add-b68dab9c52e3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237727 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237748 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237765 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58acaab1-f2eb-4504-90db-42c824ac37f6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237783 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjmfc\" (UniqueName: \"kubernetes.io/projected/58acaab1-f2eb-4504-90db-42c824ac37f6-kube-api-access-fjmfc\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237828 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237846 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkjk\" (UniqueName: \"kubernetes.io/projected/8cf47556-1366-4f5d-ba66-f336b02faa48-kube-api-access-5gkjk\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237873 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237903 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237919 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44zc\" (UniqueName: \"kubernetes.io/projected/2e314174-5790-4126-8add-b68dab9c52e3-kube-api-access-z44zc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237940 5023 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf47556-1366-4f5d-ba66-f336b02faa48-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237966 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.237985 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.239039 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58acaab1-f2eb-4504-90db-42c824ac37f6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.239060 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf47556-1366-4f5d-ba66-f336b02faa48-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.243803 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-internal-tls-certs\") pod 
\"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.244117 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.244327 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.244438 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.244550 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.244867 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.246248 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.255921 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkjk\" (UniqueName: \"kubernetes.io/projected/8cf47556-1366-4f5d-ba66-f336b02faa48-kube-api-access-5gkjk\") pod \"watcher-kuttl-api-0\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.261366 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjmfc\" (UniqueName: \"kubernetes.io/projected/58acaab1-f2eb-4504-90db-42c824ac37f6-kube-api-access-fjmfc\") pod \"watcher-kuttl-applier-0\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.334554 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.339013 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.339070 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z44zc\" (UniqueName: \"kubernetes.io/projected/2e314174-5790-4126-8add-b68dab9c52e3-kube-api-access-z44zc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.339120 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.339143 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e314174-5790-4126-8add-b68dab9c52e3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.339170 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-config-data\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.343443 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.343495 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.343748 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e314174-5790-4126-8add-b68dab9c52e3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.344482 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.346675 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.361093 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z44zc\" (UniqueName: \"kubernetes.io/projected/2e314174-5790-4126-8add-b68dab9c52e3-kube-api-access-z44zc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.475513 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:46 crc kubenswrapper[5023]: I0219 08:24:46.931362 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.064428 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.071648 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.769704 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8cf47556-1366-4f5d-ba66-f336b02faa48","Type":"ContainerStarted","Data":"3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456"} Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.771092 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8cf47556-1366-4f5d-ba66-f336b02faa48","Type":"ContainerStarted","Data":"2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb"} Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.771691 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8cf47556-1366-4f5d-ba66-f336b02faa48","Type":"ContainerStarted","Data":"f5e292203a916d3f5d0050c63b0fa12b2485adef66e2ec5f48fb7a15eed09103"} Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.771821 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.773998 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"58acaab1-f2eb-4504-90db-42c824ac37f6","Type":"ContainerStarted","Data":"db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235"} Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.774226 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"58acaab1-f2eb-4504-90db-42c824ac37f6","Type":"ContainerStarted","Data":"a5f8961a4a49539740d88f30fab5b938728d02838fbe2888db2e2010883cd726"} Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.775651 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2e314174-5790-4126-8add-b68dab9c52e3","Type":"ContainerStarted","Data":"59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae"} Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.775677 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2e314174-5790-4126-8add-b68dab9c52e3","Type":"ContainerStarted","Data":"6f8de8b90d7b37b04d206e4fe79588c04c45fdf6ac2e5672308316040a8f56d4"} Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.801024 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.801002255 podStartE2EDuration="2.801002255s" podCreationTimestamp="2026-02-19 08:24:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:47.790737213 +0000 UTC m=+1445.447856161" watchObservedRunningTime="2026-02-19 08:24:47.801002255 +0000 UTC m=+1445.458121203" Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.815988 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.8159724019999999 podStartE2EDuration="1.815972402s" podCreationTimestamp="2026-02-19 08:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:47.810538378 +0000 UTC m=+1445.467657326" watchObservedRunningTime="2026-02-19 08:24:47.815972402 +0000 UTC m=+1445.473091350" Feb 19 08:24:47 crc kubenswrapper[5023]: I0219 08:24:47.830793 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.830775665 podStartE2EDuration="1.830775665s" podCreationTimestamp="2026-02-19 08:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:24:47.829595243 +0000 UTC m=+1445.486714191" watchObservedRunningTime="2026-02-19 08:24:47.830775665 +0000 UTC m=+1445.487894613" Feb 19 08:24:49 crc kubenswrapper[5023]: I0219 08:24:49.791990 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:24:50 crc kubenswrapper[5023]: I0219 08:24:50.108845 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:51 crc kubenswrapper[5023]: I0219 08:24:51.335978 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:51 crc kubenswrapper[5023]: I0219 
08:24:51.348306 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.335655 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.347920 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.359732 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.393550 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.476425 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.502022 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.848994 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.867201 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.877102 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:24:56 crc kubenswrapper[5023]: I0219 08:24:56.879251 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:24:59 crc kubenswrapper[5023]: I0219 08:24:59.780478 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:24:59 crc kubenswrapper[5023]: I0219 08:24:59.781208 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-central-agent" containerID="cri-o://7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1" gracePeriod=30 Feb 19 08:24:59 crc kubenswrapper[5023]: I0219 08:24:59.781830 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="proxy-httpd" containerID="cri-o://c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d" gracePeriod=30 Feb 19 08:24:59 crc kubenswrapper[5023]: I0219 08:24:59.782061 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-notification-agent" containerID="cri-o://03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312" gracePeriod=30 Feb 19 08:24:59 crc kubenswrapper[5023]: I0219 08:24:59.782092 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="sg-core" containerID="cri-o://9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32" gracePeriod=30 Feb 19 08:24:59 crc kubenswrapper[5023]: I0219 08:24:59.904784 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.174:3000/\": EOF" Feb 19 08:25:00 crc 
kubenswrapper[5023]: I0219 08:25:00.896932 5023 generic.go:334] "Generic (PLEG): container finished" podID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerID="c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d" exitCode=0 Feb 19 08:25:00 crc kubenswrapper[5023]: I0219 08:25:00.898250 5023 generic.go:334] "Generic (PLEG): container finished" podID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerID="9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32" exitCode=2 Feb 19 08:25:00 crc kubenswrapper[5023]: I0219 08:25:00.898328 5023 generic.go:334] "Generic (PLEG): container finished" podID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerID="7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1" exitCode=0 Feb 19 08:25:00 crc kubenswrapper[5023]: I0219 08:25:00.897005 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerDied","Data":"c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d"} Feb 19 08:25:00 crc kubenswrapper[5023]: I0219 08:25:00.898489 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerDied","Data":"9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32"} Feb 19 08:25:00 crc kubenswrapper[5023]: I0219 08:25:00.898592 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerDied","Data":"7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1"} Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.579920 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681479 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-sg-core-conf-yaml\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681542 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-combined-ca-bundle\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681585 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-log-httpd\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681701 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-scripts\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681723 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-run-httpd\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681767 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv28h\" (UniqueName: 
\"kubernetes.io/projected/a89eac24-2c7e-4194-8c7f-9063887084ba-kube-api-access-cv28h\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681783 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-ceilometer-tls-certs\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.681812 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-config-data\") pod \"a89eac24-2c7e-4194-8c7f-9063887084ba\" (UID: \"a89eac24-2c7e-4194-8c7f-9063887084ba\") " Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.682448 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.682464 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.683082 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.683204 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89eac24-2c7e-4194-8c7f-9063887084ba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.686758 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-scripts" (OuterVolumeSpecName: "scripts") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.687211 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a89eac24-2c7e-4194-8c7f-9063887084ba-kube-api-access-cv28h" (OuterVolumeSpecName: "kube-api-access-cv28h") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "kube-api-access-cv28h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.707203 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.726740 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.744800 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.774784 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-config-data" (OuterVolumeSpecName: "config-data") pod "a89eac24-2c7e-4194-8c7f-9063887084ba" (UID: "a89eac24-2c7e-4194-8c7f-9063887084ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.784804 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.784841 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv28h\" (UniqueName: \"kubernetes.io/projected/a89eac24-2c7e-4194-8c7f-9063887084ba-kube-api-access-cv28h\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.784857 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.784865 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.784874 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.784884 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89eac24-2c7e-4194-8c7f-9063887084ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.933357 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8kdl"] Feb 19 08:25:04 crc kubenswrapper[5023]: E0219 08:25:04.933726 5023 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="proxy-httpd" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.933745 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="proxy-httpd" Feb 19 08:25:04 crc kubenswrapper[5023]: E0219 08:25:04.933776 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-notification-agent" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.933786 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-notification-agent" Feb 19 08:25:04 crc kubenswrapper[5023]: E0219 08:25:04.933800 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-central-agent" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.933807 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-central-agent" Feb 19 08:25:04 crc kubenswrapper[5023]: E0219 08:25:04.933832 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="sg-core" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.933841 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="sg-core" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.933986 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-central-agent" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.933996 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="ceilometer-notification-agent" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.934008 5023 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="sg-core" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.934016 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerName="proxy-httpd" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.935164 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.948352 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8kdl"] Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.951035 5023 generic.go:334] "Generic (PLEG): container finished" podID="a89eac24-2c7e-4194-8c7f-9063887084ba" containerID="03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312" exitCode=0 Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.951076 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerDied","Data":"03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312"} Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.951103 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a89eac24-2c7e-4194-8c7f-9063887084ba","Type":"ContainerDied","Data":"7ad2105a016d52fb22ebfbd14f239683615f02650a598de2aadf2881005058d4"} Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.951126 5023 scope.go:117] "RemoveContainer" containerID="c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.951294 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.990090 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-catalog-content\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.990171 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-utilities\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.990233 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69m97\" (UniqueName: \"kubernetes.io/projected/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-kube-api-access-69m97\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:04 crc kubenswrapper[5023]: I0219 08:25:04.990858 5023 scope.go:117] "RemoveContainer" containerID="9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.006656 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.018428 5023 scope.go:117] "RemoveContainer" containerID="03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.018945 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:25:05 crc 
kubenswrapper[5023]: I0219 08:25:05.034502 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.036595 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.037976 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.044022 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.044302 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.044676 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.056571 5023 scope.go:117] "RemoveContainer" containerID="7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.083499 5023 scope.go:117] "RemoveContainer" containerID="c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d" Feb 19 08:25:05 crc kubenswrapper[5023]: E0219 08:25:05.083810 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d\": container with ID starting with c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d not found: ID does not exist" containerID="c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.083840 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d"} err="failed to get container status \"c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d\": rpc error: code = NotFound desc = could not find container \"c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d\": container with ID starting with c6587516a76f98dc48f5095950e79471f5f0d14a5620a231944e9c2571d5802d not found: ID does not exist" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.083861 5023 scope.go:117] "RemoveContainer" containerID="9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32" Feb 19 08:25:05 crc kubenswrapper[5023]: E0219 08:25:05.084406 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32\": container with ID starting with 9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32 not found: ID does not exist" containerID="9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.084427 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32"} err="failed to get container status \"9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32\": rpc error: code = NotFound desc = could not find container \"9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32\": container with ID starting with 9cb4bf30d9b0e5e2b430e1fc7ae9e32b4675c9df6023422c05be0d494b85be32 not found: ID does not exist" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.084438 5023 scope.go:117] "RemoveContainer" containerID="03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312" Feb 19 08:25:05 crc kubenswrapper[5023]: E0219 08:25:05.084928 5023 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312\": container with ID starting with 03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312 not found: ID does not exist" containerID="03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.084951 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312"} err="failed to get container status \"03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312\": rpc error: code = NotFound desc = could not find container \"03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312\": container with ID starting with 03fbde644e3010cb5de15a0068cba48ad84897adcf05e03448e7f71d3563f312 not found: ID does not exist" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.084966 5023 scope.go:117] "RemoveContainer" containerID="7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1" Feb 19 08:25:05 crc kubenswrapper[5023]: E0219 08:25:05.085222 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1\": container with ID starting with 7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1 not found: ID does not exist" containerID="7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.085246 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1"} err="failed to get container status \"7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1\": rpc error: code = NotFound desc = could not find container 
\"7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1\": container with ID starting with 7cac2fedd5dee80c7960e8a22c92104e19093dc4371ed9aa520244e03bc8e0c1 not found: ID does not exist" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.091864 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-utilities\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.091910 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.091941 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.091979 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-scripts\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092002 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69m97\" (UniqueName: 
\"kubernetes.io/projected/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-kube-api-access-69m97\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092034 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-run-httpd\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092064 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092080 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-catalog-content\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092094 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-log-httpd\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092117 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhhj\" (UniqueName: 
\"kubernetes.io/projected/05966302-ca1a-4ac5-a1a3-fa36220e8452-kube-api-access-7jhhj\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092148 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-config-data\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.092558 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-utilities\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.093139 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-catalog-content\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.112368 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69m97\" (UniqueName: \"kubernetes.io/projected/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-kube-api-access-69m97\") pod \"redhat-operators-k8kdl\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.193477 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.193555 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.193628 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-scripts\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.193668 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-run-httpd\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.194033 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.194075 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-log-httpd\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.194103 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhhj\" (UniqueName: \"kubernetes.io/projected/05966302-ca1a-4ac5-a1a3-fa36220e8452-kube-api-access-7jhhj\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.194154 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-config-data\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.197197 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-run-httpd\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.197315 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-log-httpd\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.198712 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.199959 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.200211 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.200680 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-scripts\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.202364 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-config-data\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.216992 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhhj\" (UniqueName: \"kubernetes.io/projected/05966302-ca1a-4ac5-a1a3-fa36220e8452-kube-api-access-7jhhj\") pod \"ceilometer-0\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.266230 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.354230 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.495167 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a89eac24-2c7e-4194-8c7f-9063887084ba" path="/var/lib/kubelet/pods/a89eac24-2c7e-4194-8c7f-9063887084ba/volumes" Feb 19 08:25:05 crc kubenswrapper[5023]: W0219 08:25:05.756575 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd598331_2c6c_4568_91e4_f8ee7e01fe3b.slice/crio-c9432bd19ead786b6a66e32b444ab9730cd2e7b3fe4deb496bd515196f7f9ae3 WatchSource:0}: Error finding container c9432bd19ead786b6a66e32b444ab9730cd2e7b3fe4deb496bd515196f7f9ae3: Status 404 returned error can't find the container with id c9432bd19ead786b6a66e32b444ab9730cd2e7b3fe4deb496bd515196f7f9ae3 Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.758418 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8kdl"] Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.925873 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:25:05 crc kubenswrapper[5023]: W0219 08:25:05.926089 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05966302_ca1a_4ac5_a1a3_fa36220e8452.slice/crio-9ba1a4c0efe2e920ed38227ba938d02023c4ba6fff775b48a17721c77f7d14ff WatchSource:0}: Error finding container 9ba1a4c0efe2e920ed38227ba938d02023c4ba6fff775b48a17721c77f7d14ff: Status 404 returned error can't find the container with id 9ba1a4c0efe2e920ed38227ba938d02023c4ba6fff775b48a17721c77f7d14ff Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.966374 5023 generic.go:334] "Generic (PLEG): container finished" podID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerID="d33ea2172fc6cd5cf84cce5c85f9f20b883b845b7385486a3e1f12f60c397801" exitCode=0 Feb 19 08:25:05 
crc kubenswrapper[5023]: I0219 08:25:05.966434 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8kdl" event={"ID":"dd598331-2c6c-4568-91e4-f8ee7e01fe3b","Type":"ContainerDied","Data":"d33ea2172fc6cd5cf84cce5c85f9f20b883b845b7385486a3e1f12f60c397801"} Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.966458 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8kdl" event={"ID":"dd598331-2c6c-4568-91e4-f8ee7e01fe3b","Type":"ContainerStarted","Data":"c9432bd19ead786b6a66e32b444ab9730cd2e7b3fe4deb496bd515196f7f9ae3"} Feb 19 08:25:05 crc kubenswrapper[5023]: I0219 08:25:05.969503 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerStarted","Data":"9ba1a4c0efe2e920ed38227ba938d02023c4ba6fff775b48a17721c77f7d14ff"} Feb 19 08:25:06 crc kubenswrapper[5023]: I0219 08:25:06.978731 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerStarted","Data":"a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a"} Feb 19 08:25:06 crc kubenswrapper[5023]: I0219 08:25:06.980851 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8kdl" event={"ID":"dd598331-2c6c-4568-91e4-f8ee7e01fe3b","Type":"ContainerStarted","Data":"506b7898e91bdfaf488d4cb140b05676657e104e7e163087f52d9f8b6c0a2690"} Feb 19 08:25:07 crc kubenswrapper[5023]: I0219 08:25:07.990422 5023 generic.go:334] "Generic (PLEG): container finished" podID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerID="506b7898e91bdfaf488d4cb140b05676657e104e7e163087f52d9f8b6c0a2690" exitCode=0 Feb 19 08:25:07 crc kubenswrapper[5023]: I0219 08:25:07.990523 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8kdl" 
event={"ID":"dd598331-2c6c-4568-91e4-f8ee7e01fe3b","Type":"ContainerDied","Data":"506b7898e91bdfaf488d4cb140b05676657e104e7e163087f52d9f8b6c0a2690"} Feb 19 08:25:08 crc kubenswrapper[5023]: I0219 08:25:08.008007 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerStarted","Data":"69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53"} Feb 19 08:25:09 crc kubenswrapper[5023]: I0219 08:25:09.018995 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerStarted","Data":"0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5"} Feb 19 08:25:09 crc kubenswrapper[5023]: I0219 08:25:09.021852 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8kdl" event={"ID":"dd598331-2c6c-4568-91e4-f8ee7e01fe3b","Type":"ContainerStarted","Data":"73a63518502ca6d01df21dcdac84013cfcf1ba05dd61f42d97347159badf0567"} Feb 19 08:25:09 crc kubenswrapper[5023]: I0219 08:25:09.051280 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8kdl" podStartSLOduration=2.454798227 podStartE2EDuration="5.051260987s" podCreationTimestamp="2026-02-19 08:25:04 +0000 UTC" firstStartedPulling="2026-02-19 08:25:05.968340432 +0000 UTC m=+1463.625459380" lastFinishedPulling="2026-02-19 08:25:08.564803192 +0000 UTC m=+1466.221922140" observedRunningTime="2026-02-19 08:25:09.046003788 +0000 UTC m=+1466.703122736" watchObservedRunningTime="2026-02-19 08:25:09.051260987 +0000 UTC m=+1466.708379935" Feb 19 08:25:11 crc kubenswrapper[5023]: I0219 08:25:11.870807 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:25:11 crc kubenswrapper[5023]: I0219 08:25:11.871394 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:25:11 crc kubenswrapper[5023]: I0219 08:25:11.871451 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:25:11 crc kubenswrapper[5023]: I0219 08:25:11.872196 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"647f5b89cada4aacd5c7cd75ae79b817efe4579aa22dd7a81e01906e874d0fd6"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:25:11 crc kubenswrapper[5023]: I0219 08:25:11.872244 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://647f5b89cada4aacd5c7cd75ae79b817efe4579aa22dd7a81e01906e874d0fd6" gracePeriod=600 Feb 19 08:25:12 crc kubenswrapper[5023]: I0219 08:25:12.049420 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="647f5b89cada4aacd5c7cd75ae79b817efe4579aa22dd7a81e01906e874d0fd6" exitCode=0 Feb 19 08:25:12 crc kubenswrapper[5023]: I0219 08:25:12.049744 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" 
event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"647f5b89cada4aacd5c7cd75ae79b817efe4579aa22dd7a81e01906e874d0fd6"} Feb 19 08:25:12 crc kubenswrapper[5023]: I0219 08:25:12.049930 5023 scope.go:117] "RemoveContainer" containerID="382a9da75f766d6a7fa79de0344e2f00ca61a6303d2cd1d90193c5d3204c10cf" Feb 19 08:25:12 crc kubenswrapper[5023]: I0219 08:25:12.055402 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerStarted","Data":"e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b"} Feb 19 08:25:12 crc kubenswrapper[5023]: I0219 08:25:12.055871 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:12 crc kubenswrapper[5023]: I0219 08:25:12.079221 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.470875204 podStartE2EDuration="8.079198935s" podCreationTimestamp="2026-02-19 08:25:04 +0000 UTC" firstStartedPulling="2026-02-19 08:25:05.928641889 +0000 UTC m=+1463.585760837" lastFinishedPulling="2026-02-19 08:25:11.53696562 +0000 UTC m=+1469.194084568" observedRunningTime="2026-02-19 08:25:12.074300305 +0000 UTC m=+1469.731419253" watchObservedRunningTime="2026-02-19 08:25:12.079198935 +0000 UTC m=+1469.736317883" Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.065022 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848"} Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.130927 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x5z8b"] Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 
08:25:13.133471 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.172543 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5z8b"] Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.223716 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-utilities\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.224105 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-catalog-content\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.224271 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hppv\" (UniqueName: \"kubernetes.io/projected/42161976-e3ff-4d82-905c-99420be2c7c4-kube-api-access-5hppv\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.326607 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-utilities\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 
08:25:13.327081 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-utilities\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b"
Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.327082 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-catalog-content\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b"
Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.327175 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hppv\" (UniqueName: \"kubernetes.io/projected/42161976-e3ff-4d82-905c-99420be2c7c4-kube-api-access-5hppv\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b"
Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.327851 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-catalog-content\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b"
Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.348842 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hppv\" (UniqueName: \"kubernetes.io/projected/42161976-e3ff-4d82-905c-99420be2c7c4-kube-api-access-5hppv\") pod \"redhat-marketplace-x5z8b\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " pod="openshift-marketplace/redhat-marketplace-x5z8b"
Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.453797 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5z8b"
Feb 19 08:25:13 crc kubenswrapper[5023]: I0219 08:25:13.940961 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5z8b"]
Feb 19 08:25:13 crc kubenswrapper[5023]: W0219 08:25:13.946065 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42161976_e3ff_4d82_905c_99420be2c7c4.slice/crio-86576a505849606b70dd3a66e50aa0a8754ee22cca157c5bb305a7d52141837b WatchSource:0}: Error finding container 86576a505849606b70dd3a66e50aa0a8754ee22cca157c5bb305a7d52141837b: Status 404 returned error can't find the container with id 86576a505849606b70dd3a66e50aa0a8754ee22cca157c5bb305a7d52141837b
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.075344 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5z8b" event={"ID":"42161976-e3ff-4d82-905c-99420be2c7c4","Type":"ContainerStarted","Data":"86576a505849606b70dd3a66e50aa0a8754ee22cca157c5bb305a7d52141837b"}
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.543892 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.544184 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="948974f6-c39b-4658-a16c-9d76e6517e3f" containerName="memcached" containerID="cri-o://8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8" gracePeriod=30
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.648426 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.649021 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-kuttl-api-log" containerID="cri-o://2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb" gracePeriod=30
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.649067 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-api" containerID="cri-o://3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456" gracePeriod=30
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.656198 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.656416 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="58acaab1-f2eb-4504-90db-42c824ac37f6" containerName="watcher-applier" containerID="cri-o://db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" gracePeriod=30
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.673674 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.673944 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="2e314174-5790-4126-8add-b68dab9c52e3" containerName="watcher-decision-engine" containerID="cri-o://59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" gracePeriod=30
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.872071 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-gxdj8"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.907583 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-gxdj8"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.966440 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-zmtgl"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.967475 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-zmtgl"]
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.967554 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.983435 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret"
Feb 19 08:25:14 crc kubenswrapper[5023]: I0219 08:25:14.983613 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.070479 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-credential-keys\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.070537 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-combined-ca-bundle\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.070587 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtbcd\" (UniqueName: \"kubernetes.io/projected/02bef69f-54ef-460f-aa22-3ac64259b621-kube-api-access-jtbcd\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.070608 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-cert-memcached-mtls\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.070643 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-fernet-keys\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.070666 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-config-data\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.070712 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-scripts\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.103876 5023 generic.go:334] "Generic (PLEG): container finished" podID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerID="2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb" exitCode=143
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.103967 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8cf47556-1366-4f5d-ba66-f336b02faa48","Type":"ContainerDied","Data":"2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb"}
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.120929 5023 generic.go:334] "Generic (PLEG): container finished" podID="42161976-e3ff-4d82-905c-99420be2c7c4" containerID="8322a27fcdc471f860ffd5d572205296d94d253d4d470ae68863e188c9b5bc65" exitCode=0
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.120973 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5z8b" event={"ID":"42161976-e3ff-4d82-905c-99420be2c7c4","Type":"ContainerDied","Data":"8322a27fcdc471f860ffd5d572205296d94d253d4d470ae68863e188c9b5bc65"}
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.172535 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-config-data\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.172613 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-scripts\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.172675 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-credential-keys\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.172702 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-combined-ca-bundle\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.172743 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtbcd\" (UniqueName: \"kubernetes.io/projected/02bef69f-54ef-460f-aa22-3ac64259b621-kube-api-access-jtbcd\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.172767 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-cert-memcached-mtls\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.172790 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-fernet-keys\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.188384 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-cert-memcached-mtls\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.195065 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-credential-keys\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.195359 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-config-data\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.195486 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-fernet-keys\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.200252 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-combined-ca-bundle\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.225252 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtbcd\" (UniqueName: \"kubernetes.io/projected/02bef69f-54ef-460f-aa22-3ac64259b621-kube-api-access-jtbcd\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.225520 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-scripts\") pod \"keystone-bootstrap-zmtgl\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.266649 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8kdl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.269813 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8kdl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.296963 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.492971 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb" path="/var/lib/kubelet/pods/72e083b8-6cdf-4a4a-9bb6-e7f20b6d5ffb/volumes"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.710828 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.797545 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-combined-ca-bundle\") pod \"948974f6-c39b-4658-a16c-9d76e6517e3f\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") "
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.797685 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-memcached-tls-certs\") pod \"948974f6-c39b-4658-a16c-9d76e6517e3f\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") "
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.797726 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-kolla-config\") pod \"948974f6-c39b-4658-a16c-9d76e6517e3f\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") "
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.797784 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-config-data\") pod \"948974f6-c39b-4658-a16c-9d76e6517e3f\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") "
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.797818 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v4d9\" (UniqueName: \"kubernetes.io/projected/948974f6-c39b-4658-a16c-9d76e6517e3f-kube-api-access-7v4d9\") pod \"948974f6-c39b-4658-a16c-9d76e6517e3f\" (UID: \"948974f6-c39b-4658-a16c-9d76e6517e3f\") "
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.799772 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "948974f6-c39b-4658-a16c-9d76e6517e3f" (UID: "948974f6-c39b-4658-a16c-9d76e6517e3f"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.800497 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-config-data" (OuterVolumeSpecName: "config-data") pod "948974f6-c39b-4658-a16c-9d76e6517e3f" (UID: "948974f6-c39b-4658-a16c-9d76e6517e3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.805767 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948974f6-c39b-4658-a16c-9d76e6517e3f-kube-api-access-7v4d9" (OuterVolumeSpecName: "kube-api-access-7v4d9") pod "948974f6-c39b-4658-a16c-9d76e6517e3f" (UID: "948974f6-c39b-4658-a16c-9d76e6517e3f"). InnerVolumeSpecName "kube-api-access-7v4d9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.857102 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948974f6-c39b-4658-a16c-9d76e6517e3f" (UID: "948974f6-c39b-4658-a16c-9d76e6517e3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.865305 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "948974f6-c39b-4658-a16c-9d76e6517e3f" (UID: "948974f6-c39b-4658-a16c-9d76e6517e3f"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.900381 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.900411 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v4d9\" (UniqueName: \"kubernetes.io/projected/948974f6-c39b-4658-a16c-9d76e6517e3f-kube-api-access-7v4d9\") on node \"crc\" DevicePath \"\""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.900423 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.900435 5023 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/948974f6-c39b-4658-a16c-9d76e6517e3f-memcached-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.900447 5023 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/948974f6-c39b-4658-a16c-9d76e6517e3f-kolla-config\") on node \"crc\" DevicePath \"\""
Feb 19 08:25:15 crc kubenswrapper[5023]: I0219 08:25:15.904772 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-zmtgl"]
Feb 19 08:25:15 crc kubenswrapper[5023]: W0219 08:25:15.912558 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02bef69f_54ef_460f_aa22_3ac64259b621.slice/crio-67cc5b49b99acde970ec972902d17b9034b40ccdf823a9ad9ee9f3423fd0922c WatchSource:0}: Error finding container 67cc5b49b99acde970ec972902d17b9034b40ccdf823a9ad9ee9f3423fd0922c: Status 404 returned error can't find the container with id 67cc5b49b99acde970ec972902d17b9034b40ccdf823a9ad9ee9f3423fd0922c
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.130592 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl" event={"ID":"02bef69f-54ef-460f-aa22-3ac64259b621","Type":"ContainerStarted","Data":"15ec93af9199004e1a29fd88407a18b01d1a7da85d00771997ffe3a26966ae6b"}
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.130954 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl" event={"ID":"02bef69f-54ef-460f-aa22-3ac64259b621","Type":"ContainerStarted","Data":"67cc5b49b99acde970ec972902d17b9034b40ccdf823a9ad9ee9f3423fd0922c"}
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.133852 5023 generic.go:334] "Generic (PLEG): container finished" podID="948974f6-c39b-4658-a16c-9d76e6517e3f" containerID="8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8" exitCode=0
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.133912 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.133906 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"948974f6-c39b-4658-a16c-9d76e6517e3f","Type":"ContainerDied","Data":"8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8"}
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.134049 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"948974f6-c39b-4658-a16c-9d76e6517e3f","Type":"ContainerDied","Data":"e4f18f2e53eaf8602801a07059fe2b9d56aec2d80d80eb9408ba03f0d8b15605"}
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.134087 5023 scope.go:117] "RemoveContainer" containerID="8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.137457 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5z8b" event={"ID":"42161976-e3ff-4d82-905c-99420be2c7c4","Type":"ContainerStarted","Data":"6c85aecd782b8968e6171ea924b7928d9c46afaaa934e89addec5169b52d5f39"}
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.154358 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl" podStartSLOduration=2.154339803 podStartE2EDuration="2.154339803s" podCreationTimestamp="2026-02-19 08:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:16.149894126 +0000 UTC m=+1473.807013074" watchObservedRunningTime="2026-02-19 08:25:16.154339803 +0000 UTC m=+1473.811458741"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.160169 5023 scope.go:117] "RemoveContainer" containerID="8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8"
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.160769 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8\": container with ID starting with 8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8 not found: ID does not exist" containerID="8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.160867 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8"} err="failed to get container status \"8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8\": rpc error: code = NotFound desc = could not find container \"8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8\": container with ID starting with 8628e3324d429814ef9125ddec07135df61d6a6d630b65e28e6ee7669d8863b8 not found: ID does not exist"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.192103 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.199494 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.214248 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.214685 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948974f6-c39b-4658-a16c-9d76e6517e3f" containerName="memcached"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.214705 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="948974f6-c39b-4658-a16c-9d76e6517e3f" containerName="memcached"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.214892 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="948974f6-c39b-4658-a16c-9d76e6517e3f" containerName="memcached"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.215815 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.217994 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.218139 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-xp5s8"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.218481 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.230581 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.306814 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29422048-b3f8-4f11-a4d8-e633cb5d12b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.306922 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29422048-b3f8-4f11-a4d8-e633cb5d12b8-kolla-config\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.306990 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/29422048-b3f8-4f11-a4d8-e633cb5d12b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.307042 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zq9\" (UniqueName: \"kubernetes.io/projected/29422048-b3f8-4f11-a4d8-e633cb5d12b8-kube-api-access-55zq9\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.307144 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29422048-b3f8-4f11-a4d8-e633cb5d12b8-config-data\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.336323 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.176:9322/\": dial tcp 10.217.0.176:9322: connect: connection refused"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.336395 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.176:9322/\": dial tcp 10.217.0.176:9322: connect: connection refused"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.347667 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8kdl" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="registry-server" probeResult="failure" output=<
Feb 19 08:25:16 crc kubenswrapper[5023]: timeout: failed to connect service ":50051" within 1s
Feb 19 08:25:16 crc kubenswrapper[5023]: >
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.349196 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.350412 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.352314 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.352359 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="58acaab1-f2eb-4504-90db-42c824ac37f6" containerName="watcher-applier"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.408223 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29422048-b3f8-4f11-a4d8-e633cb5d12b8-kolla-config\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.408275 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/29422048-b3f8-4f11-a4d8-e633cb5d12b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.408303 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zq9\" (UniqueName: \"kubernetes.io/projected/29422048-b3f8-4f11-a4d8-e633cb5d12b8-kube-api-access-55zq9\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.408357 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29422048-b3f8-4f11-a4d8-e633cb5d12b8-config-data\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.408390 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29422048-b3f8-4f11-a4d8-e633cb5d12b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.409525 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29422048-b3f8-4f11-a4d8-e633cb5d12b8-kolla-config\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.409808 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29422048-b3f8-4f11-a4d8-e633cb5d12b8-config-data\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.413109 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/29422048-b3f8-4f11-a4d8-e633cb5d12b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.413452 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29422048-b3f8-4f11-a4d8-e633cb5d12b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.432147 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zq9\" (UniqueName: \"kubernetes.io/projected/29422048-b3f8-4f11-a4d8-e633cb5d12b8-kube-api-access-55zq9\") pod \"memcached-0\" (UID: \"29422048-b3f8-4f11-a4d8-e633cb5d12b8\") " pod="watcher-kuttl-default/memcached-0"
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.486141 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.501270 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.504648 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 08:25:16 crc kubenswrapper[5023]: E0219 08:25:16.504689 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="2e314174-5790-4126-8add-b68dab9c52e3" containerName="watcher-decision-engine" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.531737 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.724643 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.813015 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-combined-ca-bundle\") pod \"8cf47556-1366-4f5d-ba66-f336b02faa48\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.813141 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-public-tls-certs\") pod \"8cf47556-1366-4f5d-ba66-f336b02faa48\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.813194 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkjk\" (UniqueName: \"kubernetes.io/projected/8cf47556-1366-4f5d-ba66-f336b02faa48-kube-api-access-5gkjk\") pod \"8cf47556-1366-4f5d-ba66-f336b02faa48\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.813281 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-config-data\") pod \"8cf47556-1366-4f5d-ba66-f336b02faa48\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.814391 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf47556-1366-4f5d-ba66-f336b02faa48-logs\") pod \"8cf47556-1366-4f5d-ba66-f336b02faa48\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.814471 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-custom-prometheus-ca\") pod \"8cf47556-1366-4f5d-ba66-f336b02faa48\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.814509 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-internal-tls-certs\") pod \"8cf47556-1366-4f5d-ba66-f336b02faa48\" (UID: \"8cf47556-1366-4f5d-ba66-f336b02faa48\") " Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.816135 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf47556-1366-4f5d-ba66-f336b02faa48-logs" (OuterVolumeSpecName: "logs") pod "8cf47556-1366-4f5d-ba66-f336b02faa48" (UID: "8cf47556-1366-4f5d-ba66-f336b02faa48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.819011 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf47556-1366-4f5d-ba66-f336b02faa48-kube-api-access-5gkjk" (OuterVolumeSpecName: "kube-api-access-5gkjk") pod "8cf47556-1366-4f5d-ba66-f336b02faa48" (UID: "8cf47556-1366-4f5d-ba66-f336b02faa48"). InnerVolumeSpecName "kube-api-access-5gkjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.862030 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8cf47556-1366-4f5d-ba66-f336b02faa48" (UID: "8cf47556-1366-4f5d-ba66-f336b02faa48"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.865574 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cf47556-1366-4f5d-ba66-f336b02faa48" (UID: "8cf47556-1366-4f5d-ba66-f336b02faa48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:16 crc kubenswrapper[5023]: W0219 08:25:16.875208 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29422048_b3f8_4f11_a4d8_e633cb5d12b8.slice/crio-7ae2429e8f9f17b54255e1db8f9e8c608f9aceb73b3db18446c8570bd59a3f66 WatchSource:0}: Error finding container 7ae2429e8f9f17b54255e1db8f9e8c608f9aceb73b3db18446c8570bd59a3f66: Status 404 returned error can't find the container with id 7ae2429e8f9f17b54255e1db8f9e8c608f9aceb73b3db18446c8570bd59a3f66 Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.887072 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.892801 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-config-data" (OuterVolumeSpecName: "config-data") pod "8cf47556-1366-4f5d-ba66-f336b02faa48" (UID: "8cf47556-1366-4f5d-ba66-f336b02faa48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.894731 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8cf47556-1366-4f5d-ba66-f336b02faa48" (UID: "8cf47556-1366-4f5d-ba66-f336b02faa48"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.905201 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8cf47556-1366-4f5d-ba66-f336b02faa48" (UID: "8cf47556-1366-4f5d-ba66-f336b02faa48"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.916878 5023 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.916915 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkjk\" (UniqueName: \"kubernetes.io/projected/8cf47556-1366-4f5d-ba66-f336b02faa48-kube-api-access-5gkjk\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.916926 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.916939 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf47556-1366-4f5d-ba66-f336b02faa48-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.916948 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.916959 5023 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:16 crc kubenswrapper[5023]: I0219 08:25:16.916968 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf47556-1366-4f5d-ba66-f336b02faa48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.150563 5023 generic.go:334] "Generic (PLEG): container finished" podID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerID="3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456" exitCode=0 Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.150670 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8cf47556-1366-4f5d-ba66-f336b02faa48","Type":"ContainerDied","Data":"3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456"} Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.150696 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8cf47556-1366-4f5d-ba66-f336b02faa48","Type":"ContainerDied","Data":"f5e292203a916d3f5d0050c63b0fa12b2485adef66e2ec5f48fb7a15eed09103"} Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.150715 5023 scope.go:117] "RemoveContainer" containerID="3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.152068 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.154259 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"29422048-b3f8-4f11-a4d8-e633cb5d12b8","Type":"ContainerStarted","Data":"98f5675e6cd476f43d7e246ee84cf71af730b7163a8776770b875edb60d8f8bd"} Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.154306 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"29422048-b3f8-4f11-a4d8-e633cb5d12b8","Type":"ContainerStarted","Data":"7ae2429e8f9f17b54255e1db8f9e8c608f9aceb73b3db18446c8570bd59a3f66"} Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.155198 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.159730 5023 generic.go:334] "Generic (PLEG): container finished" podID="42161976-e3ff-4d82-905c-99420be2c7c4" containerID="6c85aecd782b8968e6171ea924b7928d9c46afaaa934e89addec5169b52d5f39" exitCode=0 Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.159855 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5z8b" event={"ID":"42161976-e3ff-4d82-905c-99420be2c7c4","Type":"ContainerDied","Data":"6c85aecd782b8968e6171ea924b7928d9c46afaaa934e89addec5169b52d5f39"} Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.176116 5023 scope.go:117] "RemoveContainer" containerID="2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.204335 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=1.204305198 podStartE2EDuration="1.204305198s" podCreationTimestamp="2026-02-19 08:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:17.194430106 +0000 UTC m=+1474.851549064" watchObservedRunningTime="2026-02-19 08:25:17.204305198 +0000 UTC m=+1474.861424146" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.205797 5023 scope.go:117] "RemoveContainer" containerID="3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456" Feb 19 08:25:17 crc kubenswrapper[5023]: E0219 08:25:17.206846 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456\": container with ID starting with 3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456 not found: ID does not exist" containerID="3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.206911 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456"} err="failed to get container status \"3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456\": rpc error: code = NotFound desc = could not find container \"3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456\": container with ID starting with 3bb2209d3b1e6183cf6a039e3ca567f65c86bf98ef4549ecb6787f4d7b50d456 not found: ID does not exist" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.206948 5023 scope.go:117] "RemoveContainer" containerID="2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb" Feb 19 08:25:17 crc kubenswrapper[5023]: E0219 08:25:17.210844 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb\": container with ID starting with 2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb not found: ID does not exist" 
containerID="2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.210903 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb"} err="failed to get container status \"2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb\": rpc error: code = NotFound desc = could not find container \"2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb\": container with ID starting with 2d559f617b4ea1597b77d363d1160a7d93f8460f48ad59d8e746e072be44ddeb not found: ID does not exist" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.222747 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.272665 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.300833 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:17 crc kubenswrapper[5023]: E0219 08:25:17.301285 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-kuttl-api-log" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.301312 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-kuttl-api-log" Feb 19 08:25:17 crc kubenswrapper[5023]: E0219 08:25:17.301354 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-api" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.301364 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-api" Feb 19 08:25:17 crc 
kubenswrapper[5023]: I0219 08:25:17.301569 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-api" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.301601 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" containerName="watcher-kuttl-api-log" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.302737 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.307386 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.307709 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.307783 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.307953 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434165 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434210 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: 
\"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434231 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434374 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvw78\" (UniqueName: \"kubernetes.io/projected/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-kube-api-access-wvw78\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434501 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434552 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434751 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-config-data\") pod 
\"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.434779 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.491334 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf47556-1366-4f5d-ba66-f336b02faa48" path="/var/lib/kubelet/pods/8cf47556-1366-4f5d-ba66-f336b02faa48/volumes" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.491969 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948974f6-c39b-4658-a16c-9d76e6517e3f" path="/var/lib/kubelet/pods/948974f6-c39b-4658-a16c-9d76e6517e3f/volumes" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.536126 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.537567 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.537727 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvw78\" (UniqueName: 
\"kubernetes.io/projected/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-kube-api-access-wvw78\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.537850 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.537939 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.538335 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.538426 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.538561 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-logs\") pod 
\"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.538995 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.541444 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.542590 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.549096 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.559250 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvw78\" (UniqueName: \"kubernetes.io/projected/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-kube-api-access-wvw78\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.560249 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.562583 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.563872 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:17 crc kubenswrapper[5023]: I0219 08:25:17.620826 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.110008 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:18 crc kubenswrapper[5023]: W0219 08:25:18.110901 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eaa73aa_2ad2_47a2_aa91_61af53abdaff.slice/crio-835062cf54a355d9e26ee3f85e89af5bcef854b4d69b04f7308315af53820cf2 WatchSource:0}: Error finding container 835062cf54a355d9e26ee3f85e89af5bcef854b4d69b04f7308315af53820cf2: Status 404 returned error can't find the container with id 835062cf54a355d9e26ee3f85e89af5bcef854b4d69b04f7308315af53820cf2 Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.176076 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5eaa73aa-2ad2-47a2-aa91-61af53abdaff","Type":"ContainerStarted","Data":"835062cf54a355d9e26ee3f85e89af5bcef854b4d69b04f7308315af53820cf2"} Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.187577 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5z8b" event={"ID":"42161976-e3ff-4d82-905c-99420be2c7c4","Type":"ContainerStarted","Data":"6d124649a7362f6135e24a0454ed10f29a63ba24ed82d1cd8370b89ab3a23642"} Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.213778 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x5z8b" podStartSLOduration=2.749657048 podStartE2EDuration="5.213753957s" podCreationTimestamp="2026-02-19 08:25:13 +0000 UTC" firstStartedPulling="2026-02-19 08:25:15.125787167 +0000 UTC m=+1472.782906115" lastFinishedPulling="2026-02-19 08:25:17.589884056 +0000 UTC m=+1475.247003024" observedRunningTime="2026-02-19 08:25:18.209124594 +0000 UTC m=+1475.866243542" 
watchObservedRunningTime="2026-02-19 08:25:18.213753957 +0000 UTC m=+1475.870872905" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.665249 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.763720 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-combined-ca-bundle\") pod \"58acaab1-f2eb-4504-90db-42c824ac37f6\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.763797 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjmfc\" (UniqueName: \"kubernetes.io/projected/58acaab1-f2eb-4504-90db-42c824ac37f6-kube-api-access-fjmfc\") pod \"58acaab1-f2eb-4504-90db-42c824ac37f6\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.763836 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-config-data\") pod \"58acaab1-f2eb-4504-90db-42c824ac37f6\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.764016 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58acaab1-f2eb-4504-90db-42c824ac37f6-logs\") pod \"58acaab1-f2eb-4504-90db-42c824ac37f6\" (UID: \"58acaab1-f2eb-4504-90db-42c824ac37f6\") " Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.764878 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58acaab1-f2eb-4504-90db-42c824ac37f6-logs" (OuterVolumeSpecName: "logs") pod "58acaab1-f2eb-4504-90db-42c824ac37f6" (UID: 
"58acaab1-f2eb-4504-90db-42c824ac37f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.768155 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58acaab1-f2eb-4504-90db-42c824ac37f6-kube-api-access-fjmfc" (OuterVolumeSpecName: "kube-api-access-fjmfc") pod "58acaab1-f2eb-4504-90db-42c824ac37f6" (UID: "58acaab1-f2eb-4504-90db-42c824ac37f6"). InnerVolumeSpecName "kube-api-access-fjmfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.792956 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58acaab1-f2eb-4504-90db-42c824ac37f6" (UID: "58acaab1-f2eb-4504-90db-42c824ac37f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.812596 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-config-data" (OuterVolumeSpecName: "config-data") pod "58acaab1-f2eb-4504-90db-42c824ac37f6" (UID: "58acaab1-f2eb-4504-90db-42c824ac37f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.866513 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58acaab1-f2eb-4504-90db-42c824ac37f6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.866563 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.866579 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjmfc\" (UniqueName: \"kubernetes.io/projected/58acaab1-f2eb-4504-90db-42c824ac37f6-kube-api-access-fjmfc\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:18 crc kubenswrapper[5023]: I0219 08:25:18.866592 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58acaab1-f2eb-4504-90db-42c824ac37f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.198553 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5eaa73aa-2ad2-47a2-aa91-61af53abdaff","Type":"ContainerStarted","Data":"4132ae6c3cc44a664223f75afe09203b68e2d8ae3868348f9c52bd1d42cdb001"} Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.198984 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5eaa73aa-2ad2-47a2-aa91-61af53abdaff","Type":"ContainerStarted","Data":"c91c3049b8b9e89ce2be9455ae69a5e15d8ef90dc8cb8f8bb53dbcceb03e548f"} Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.199016 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.201786 5023 
generic.go:334] "Generic (PLEG): container finished" podID="58acaab1-f2eb-4504-90db-42c824ac37f6" containerID="db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" exitCode=0 Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.201838 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.201864 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"58acaab1-f2eb-4504-90db-42c824ac37f6","Type":"ContainerDied","Data":"db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235"} Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.202004 5023 scope.go:117] "RemoveContainer" containerID="db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.202169 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"58acaab1-f2eb-4504-90db-42c824ac37f6","Type":"ContainerDied","Data":"a5f8961a4a49539740d88f30fab5b938728d02838fbe2888db2e2010883cd726"} Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.203990 5023 generic.go:334] "Generic (PLEG): container finished" podID="02bef69f-54ef-460f-aa22-3ac64259b621" containerID="15ec93af9199004e1a29fd88407a18b01d1a7da85d00771997ffe3a26966ae6b" exitCode=0 Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.204050 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl" event={"ID":"02bef69f-54ef-460f-aa22-3ac64259b621","Type":"ContainerDied","Data":"15ec93af9199004e1a29fd88407a18b01d1a7da85d00771997ffe3a26966ae6b"} Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.224874 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.22485847 
podStartE2EDuration="2.22485847s" podCreationTimestamp="2026-02-19 08:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:19.218183343 +0000 UTC m=+1476.875302281" watchObservedRunningTime="2026-02-19 08:25:19.22485847 +0000 UTC m=+1476.881977418" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.232213 5023 scope.go:117] "RemoveContainer" containerID="db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" Feb 19 08:25:19 crc kubenswrapper[5023]: E0219 08:25:19.232735 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235\": container with ID starting with db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235 not found: ID does not exist" containerID="db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.232790 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235"} err="failed to get container status \"db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235\": rpc error: code = NotFound desc = could not find container \"db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235\": container with ID starting with db4b327cc03e388ce206bcba458afe32db45555be7a9c0fc270ee96d3d7d6235 not found: ID does not exist" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.258750 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.268086 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.277778 
5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:25:19 crc kubenswrapper[5023]: E0219 08:25:19.278201 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58acaab1-f2eb-4504-90db-42c824ac37f6" containerName="watcher-applier" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.278222 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="58acaab1-f2eb-4504-90db-42c824ac37f6" containerName="watcher-applier" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.278419 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="58acaab1-f2eb-4504-90db-42c824ac37f6" containerName="watcher-applier" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.279096 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.282117 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.328532 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.373398 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.373501 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: 
\"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.373547 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.373562 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvh5\" (UniqueName: \"kubernetes.io/projected/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-kube-api-access-4tvh5\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.373613 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.475546 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.475602 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-config-data\") pod 
\"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.475649 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.475674 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tvh5\" (UniqueName: \"kubernetes.io/projected/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-kube-api-access-4tvh5\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.475714 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.481291 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.481931 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.484283 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.491147 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.492558 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58acaab1-f2eb-4504-90db-42c824ac37f6" path="/var/lib/kubelet/pods/58acaab1-f2eb-4504-90db-42c824ac37f6/volumes" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.503850 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tvh5\" (UniqueName: \"kubernetes.io/projected/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-kube-api-access-4tvh5\") pod \"watcher-kuttl-applier-0\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:19 crc kubenswrapper[5023]: I0219 08:25:19.635509 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:20 crc kubenswrapper[5023]: I0219 08:25:20.129344 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:25:20 crc kubenswrapper[5023]: I0219 08:25:20.228085 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b94b892f-04c3-42ab-867e-65d9f5ffa0b1","Type":"ContainerStarted","Data":"89e1f773269a3e551b4c29875b5e91b388a3b9e96b502534eb1500952848ec47"} Feb 19 08:25:20 crc kubenswrapper[5023]: I0219 08:25:20.920829 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.006817 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtbcd\" (UniqueName: \"kubernetes.io/projected/02bef69f-54ef-460f-aa22-3ac64259b621-kube-api-access-jtbcd\") pod \"02bef69f-54ef-460f-aa22-3ac64259b621\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.006887 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-credential-keys\") pod \"02bef69f-54ef-460f-aa22-3ac64259b621\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.006941 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-config-data\") pod \"02bef69f-54ef-460f-aa22-3ac64259b621\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.007007 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-fernet-keys\") pod \"02bef69f-54ef-460f-aa22-3ac64259b621\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.007042 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-scripts\") pod \"02bef69f-54ef-460f-aa22-3ac64259b621\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.007095 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-combined-ca-bundle\") pod \"02bef69f-54ef-460f-aa22-3ac64259b621\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.007116 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-cert-memcached-mtls\") pod \"02bef69f-54ef-460f-aa22-3ac64259b621\" (UID: \"02bef69f-54ef-460f-aa22-3ac64259b621\") " Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.013104 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "02bef69f-54ef-460f-aa22-3ac64259b621" (UID: "02bef69f-54ef-460f-aa22-3ac64259b621"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.014424 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "02bef69f-54ef-460f-aa22-3ac64259b621" (UID: "02bef69f-54ef-460f-aa22-3ac64259b621"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.016488 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bef69f-54ef-460f-aa22-3ac64259b621-kube-api-access-jtbcd" (OuterVolumeSpecName: "kube-api-access-jtbcd") pod "02bef69f-54ef-460f-aa22-3ac64259b621" (UID: "02bef69f-54ef-460f-aa22-3ac64259b621"). InnerVolumeSpecName "kube-api-access-jtbcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.027472 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-scripts" (OuterVolumeSpecName: "scripts") pod "02bef69f-54ef-460f-aa22-3ac64259b621" (UID: "02bef69f-54ef-460f-aa22-3ac64259b621"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.032102 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02bef69f-54ef-460f-aa22-3ac64259b621" (UID: "02bef69f-54ef-460f-aa22-3ac64259b621"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.042805 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-config-data" (OuterVolumeSpecName: "config-data") pod "02bef69f-54ef-460f-aa22-3ac64259b621" (UID: "02bef69f-54ef-460f-aa22-3ac64259b621"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:21 crc kubenswrapper[5023]: E0219 08:25:21.076579 5023 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.153:56032->38.102.83.153:46331: write tcp 38.102.83.153:56032->38.102.83.153:46331: write: broken pipe Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.089759 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "02bef69f-54ef-460f-aa22-3ac64259b621" (UID: "02bef69f-54ef-460f-aa22-3ac64259b621"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.108776 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.108826 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.108839 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtbcd\" (UniqueName: \"kubernetes.io/projected/02bef69f-54ef-460f-aa22-3ac64259b621-kube-api-access-jtbcd\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.108851 5023 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.108864 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.108874 5023 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.108886 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02bef69f-54ef-460f-aa22-3ac64259b621-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.239900 5023 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b94b892f-04c3-42ab-867e-65d9f5ffa0b1","Type":"ContainerStarted","Data":"89e09ec5bcc66319d9a90acafe989710b320a0455fbbd0e706baaa885e977f49"} Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.242899 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl" event={"ID":"02bef69f-54ef-460f-aa22-3ac64259b621","Type":"ContainerDied","Data":"67cc5b49b99acde970ec972902d17b9034b40ccdf823a9ad9ee9f3423fd0922c"} Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.242946 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67cc5b49b99acde970ec972902d17b9034b40ccdf823a9ad9ee9f3423fd0922c" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.243012 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-zmtgl" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.269904 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.269878312 podStartE2EDuration="2.269878312s" podCreationTimestamp="2026-02-19 08:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:21.266141033 +0000 UTC m=+1478.923259981" watchObservedRunningTime="2026-02-19 08:25:21.269878312 +0000 UTC m=+1478.926997260" Feb 19 08:25:21 crc kubenswrapper[5023]: I0219 08:25:21.734979 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:22 crc kubenswrapper[5023]: I0219 08:25:22.621824 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:23 crc kubenswrapper[5023]: I0219 08:25:23.454480 5023 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:23 crc kubenswrapper[5023]: I0219 08:25:23.454867 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:23 crc kubenswrapper[5023]: I0219 08:25:23.599747 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:24 crc kubenswrapper[5023]: I0219 08:25:24.321606 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:24 crc kubenswrapper[5023]: I0219 08:25:24.635994 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.309010 5023 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8kdl" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="registry-server" probeResult="failure" output=< Feb 19 08:25:26 crc kubenswrapper[5023]: timeout: failed to connect service ":50051" within 1s Feb 19 08:25:26 crc kubenswrapper[5023]: > Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.534310 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.658044 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.681857 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-747f4cf75-wlbr2"] Feb 19 08:25:26 crc kubenswrapper[5023]: E0219 08:25:26.682217 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e314174-5790-4126-8add-b68dab9c52e3" containerName="watcher-decision-engine" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.682232 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e314174-5790-4126-8add-b68dab9c52e3" containerName="watcher-decision-engine" Feb 19 08:25:26 crc kubenswrapper[5023]: E0219 08:25:26.682255 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bef69f-54ef-460f-aa22-3ac64259b621" containerName="keystone-bootstrap" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.682261 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bef69f-54ef-460f-aa22-3ac64259b621" containerName="keystone-bootstrap" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.682406 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bef69f-54ef-460f-aa22-3ac64259b621" containerName="keystone-bootstrap" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.682426 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e314174-5790-4126-8add-b68dab9c52e3" containerName="watcher-decision-engine" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.682973 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.713791 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z44zc\" (UniqueName: \"kubernetes.io/projected/2e314174-5790-4126-8add-b68dab9c52e3-kube-api-access-z44zc\") pod \"2e314174-5790-4126-8add-b68dab9c52e3\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.713850 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e314174-5790-4126-8add-b68dab9c52e3-logs\") pod \"2e314174-5790-4126-8add-b68dab9c52e3\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.713911 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-config-data\") pod \"2e314174-5790-4126-8add-b68dab9c52e3\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.714226 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e314174-5790-4126-8add-b68dab9c52e3-logs" (OuterVolumeSpecName: "logs") pod "2e314174-5790-4126-8add-b68dab9c52e3" (UID: "2e314174-5790-4126-8add-b68dab9c52e3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.714764 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-combined-ca-bundle\") pod \"2e314174-5790-4126-8add-b68dab9c52e3\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.714798 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-custom-prometheus-ca\") pod \"2e314174-5790-4126-8add-b68dab9c52e3\" (UID: \"2e314174-5790-4126-8add-b68dab9c52e3\") " Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.715198 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e314174-5790-4126-8add-b68dab9c52e3-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.719075 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e314174-5790-4126-8add-b68dab9c52e3-kube-api-access-z44zc" (OuterVolumeSpecName: "kube-api-access-z44zc") pod "2e314174-5790-4126-8add-b68dab9c52e3" (UID: "2e314174-5790-4126-8add-b68dab9c52e3"). InnerVolumeSpecName "kube-api-access-z44zc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.725144 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-747f4cf75-wlbr2"] Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.758958 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e314174-5790-4126-8add-b68dab9c52e3" (UID: "2e314174-5790-4126-8add-b68dab9c52e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.765797 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2e314174-5790-4126-8add-b68dab9c52e3" (UID: "2e314174-5790-4126-8add-b68dab9c52e3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.774135 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-config-data" (OuterVolumeSpecName: "config-data") pod "2e314174-5790-4126-8add-b68dab9c52e3" (UID: "2e314174-5790-4126-8add-b68dab9c52e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816166 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-config-data\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816212 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-scripts\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816233 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jrw\" (UniqueName: \"kubernetes.io/projected/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-kube-api-access-v9jrw\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816257 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-public-tls-certs\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816413 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-combined-ca-bundle\") pod 
\"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816469 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-credential-keys\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816493 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-cert-memcached-mtls\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816534 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-internal-tls-certs\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816697 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-fernet-keys\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816793 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816807 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816818 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z44zc\" (UniqueName: \"kubernetes.io/projected/2e314174-5790-4126-8add-b68dab9c52e3-kube-api-access-z44zc\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.816828 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e314174-5790-4126-8add-b68dab9c52e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918240 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-combined-ca-bundle\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918287 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-credential-keys\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918306 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-cert-memcached-mtls\") pod 
\"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918333 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-internal-tls-certs\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918380 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-fernet-keys\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918424 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-config-data\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918446 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-scripts\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918464 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9jrw\" (UniqueName: \"kubernetes.io/projected/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-kube-api-access-v9jrw\") pod \"keystone-747f4cf75-wlbr2\" (UID: 
\"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.918485 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-public-tls-certs\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.922380 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-internal-tls-certs\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.922441 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-public-tls-certs\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.922769 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-combined-ca-bundle\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.922921 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-cert-memcached-mtls\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " 
pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.923152 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-credential-keys\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.923269 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-fernet-keys\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.923653 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-config-data\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.925983 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-scripts\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: I0219 08:25:26.937259 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jrw\" (UniqueName: \"kubernetes.io/projected/dc6ddf02-3388-47f8-a46e-5528afaa1d4f-kube-api-access-v9jrw\") pod \"keystone-747f4cf75-wlbr2\" (UID: \"dc6ddf02-3388-47f8-a46e-5528afaa1d4f\") " pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:26 crc kubenswrapper[5023]: 
I0219 08:25:26.999231 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.141557 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5z8b"] Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.141986 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x5z8b" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="registry-server" containerID="cri-o://6d124649a7362f6135e24a0454ed10f29a63ba24ed82d1cd8370b89ab3a23642" gracePeriod=2 Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.300973 5023 generic.go:334] "Generic (PLEG): container finished" podID="42161976-e3ff-4d82-905c-99420be2c7c4" containerID="6d124649a7362f6135e24a0454ed10f29a63ba24ed82d1cd8370b89ab3a23642" exitCode=0 Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.301041 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5z8b" event={"ID":"42161976-e3ff-4d82-905c-99420be2c7c4","Type":"ContainerDied","Data":"6d124649a7362f6135e24a0454ed10f29a63ba24ed82d1cd8370b89ab3a23642"} Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.303748 5023 generic.go:334] "Generic (PLEG): container finished" podID="2e314174-5790-4126-8add-b68dab9c52e3" containerID="59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" exitCode=0 Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.303773 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2e314174-5790-4126-8add-b68dab9c52e3","Type":"ContainerDied","Data":"59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae"} Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.303811 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2e314174-5790-4126-8add-b68dab9c52e3","Type":"ContainerDied","Data":"6f8de8b90d7b37b04d206e4fe79588c04c45fdf6ac2e5672308316040a8f56d4"} Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.303830 5023 scope.go:117] "RemoveContainer" containerID="59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.303982 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.339171 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.339328 5023 scope.go:117] "RemoveContainer" containerID="59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" Feb 19 08:25:27 crc kubenswrapper[5023]: E0219 08:25:27.340081 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae\": container with ID starting with 59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae not found: ID does not exist" containerID="59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.340111 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae"} err="failed to get container status \"59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae\": rpc error: code = NotFound desc = could not find container \"59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae\": container with ID starting with 59ebbb5b292ae82701fa592ac35c885e3f05b39f1056b76411e2c2fba61982ae not found: ID does 
not exist" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.348029 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.384847 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.392912 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.398329 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.403272 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.429227 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.429290 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.429352 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwsd\" (UniqueName: 
\"kubernetes.io/projected/742301d1-d2cb-4e92-8bcc-5129532c4124-kube-api-access-djwsd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.429395 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.429434 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.429495 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742301d1-d2cb-4e92-8bcc-5129532c4124-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.498601 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e314174-5790-4126-8add-b68dab9c52e3" path="/var/lib/kubelet/pods/2e314174-5790-4126-8add-b68dab9c52e3/volumes" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.527753 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-747f4cf75-wlbr2"] Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.531950 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.532012 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.532080 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwsd\" (UniqueName: \"kubernetes.io/projected/742301d1-d2cb-4e92-8bcc-5129532c4124-kube-api-access-djwsd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.532162 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.532221 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 
08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.532267 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742301d1-d2cb-4e92-8bcc-5129532c4124-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.536163 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742301d1-d2cb-4e92-8bcc-5129532c4124-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.538214 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.545798 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.550351 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: 
I0219 08:25:27.555199 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.567137 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwsd\" (UniqueName: \"kubernetes.io/projected/742301d1-d2cb-4e92-8bcc-5129532c4124-kube-api-access-djwsd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: W0219 08:25:27.569742 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc6ddf02_3388_47f8_a46e_5528afaa1d4f.slice/crio-3ba453f18f4c7fa6309b60faed4a1db20a8d282f2fe169f6d065b85035dc64e4 WatchSource:0}: Error finding container 3ba453f18f4c7fa6309b60faed4a1db20a8d282f2fe169f6d065b85035dc64e4: Status 404 returned error can't find the container with id 3ba453f18f4c7fa6309b60faed4a1db20a8d282f2fe169f6d065b85035dc64e4 Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.625395 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.723029 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.761147 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.779732 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.850364 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-utilities\") pod \"42161976-e3ff-4d82-905c-99420be2c7c4\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.850484 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-catalog-content\") pod \"42161976-e3ff-4d82-905c-99420be2c7c4\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.850563 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hppv\" (UniqueName: \"kubernetes.io/projected/42161976-e3ff-4d82-905c-99420be2c7c4-kube-api-access-5hppv\") pod \"42161976-e3ff-4d82-905c-99420be2c7c4\" (UID: \"42161976-e3ff-4d82-905c-99420be2c7c4\") " Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.851137 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-utilities" (OuterVolumeSpecName: "utilities") pod "42161976-e3ff-4d82-905c-99420be2c7c4" (UID: "42161976-e3ff-4d82-905c-99420be2c7c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.854216 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42161976-e3ff-4d82-905c-99420be2c7c4-kube-api-access-5hppv" (OuterVolumeSpecName: "kube-api-access-5hppv") pod "42161976-e3ff-4d82-905c-99420be2c7c4" (UID: "42161976-e3ff-4d82-905c-99420be2c7c4"). InnerVolumeSpecName "kube-api-access-5hppv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.876754 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42161976-e3ff-4d82-905c-99420be2c7c4" (UID: "42161976-e3ff-4d82-905c-99420be2c7c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.953082 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hppv\" (UniqueName: \"kubernetes.io/projected/42161976-e3ff-4d82-905c-99420be2c7c4-kube-api-access-5hppv\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.953115 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:27 crc kubenswrapper[5023]: I0219 08:25:27.953127 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42161976-e3ff-4d82-905c-99420be2c7c4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.255941 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:25:28 crc kubenswrapper[5023]: W0219 
08:25:28.257551 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod742301d1_d2cb_4e92_8bcc_5129532c4124.slice/crio-968e8a69145a7e319b19fc4c37e95a0645ea9df8c288f1c6e5ec54a39b47b39b WatchSource:0}: Error finding container 968e8a69145a7e319b19fc4c37e95a0645ea9df8c288f1c6e5ec54a39b47b39b: Status 404 returned error can't find the container with id 968e8a69145a7e319b19fc4c37e95a0645ea9df8c288f1c6e5ec54a39b47b39b Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.316057 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"742301d1-d2cb-4e92-8bcc-5129532c4124","Type":"ContainerStarted","Data":"968e8a69145a7e319b19fc4c37e95a0645ea9df8c288f1c6e5ec54a39b47b39b"} Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.317500 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" event={"ID":"dc6ddf02-3388-47f8-a46e-5528afaa1d4f","Type":"ContainerStarted","Data":"72043552477c6a6f1d62af74e4422ed465becb38e509168a1b7cf0267fe0548f"} Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.317536 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" event={"ID":"dc6ddf02-3388-47f8-a46e-5528afaa1d4f","Type":"ContainerStarted","Data":"3ba453f18f4c7fa6309b60faed4a1db20a8d282f2fe169f6d065b85035dc64e4"} Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.317800 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.337796 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x5z8b" event={"ID":"42161976-e3ff-4d82-905c-99420be2c7c4","Type":"ContainerDied","Data":"86576a505849606b70dd3a66e50aa0a8754ee22cca157c5bb305a7d52141837b"} Feb 19 08:25:28 crc 
kubenswrapper[5023]: I0219 08:25:28.337844 5023 scope.go:117] "RemoveContainer" containerID="6d124649a7362f6135e24a0454ed10f29a63ba24ed82d1cd8370b89ab3a23642" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.337952 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x5z8b" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.366121 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" podStartSLOduration=2.366098596 podStartE2EDuration="2.366098596s" podCreationTimestamp="2026-02-19 08:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:28.345266913 +0000 UTC m=+1486.002385861" watchObservedRunningTime="2026-02-19 08:25:28.366098596 +0000 UTC m=+1486.023217544" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.369332 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.456063 5023 scope.go:117] "RemoveContainer" containerID="6c85aecd782b8968e6171ea924b7928d9c46afaaa934e89addec5169b52d5f39" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.487880 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5z8b"] Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.504155 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x5z8b"] Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.505738 5023 scope.go:117] "RemoveContainer" containerID="8322a27fcdc471f860ffd5d572205296d94d253d4d470ae68863e188c9b5bc65" Feb 19 08:25:28 crc kubenswrapper[5023]: I0219 08:25:28.791702 5023 scope.go:117] "RemoveContainer" 
containerID="75cb2f68365f7788ce99693e560fb0734677d2d45ef58b955c48f6366a6dd46b" Feb 19 08:25:29 crc kubenswrapper[5023]: I0219 08:25:29.361165 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"742301d1-d2cb-4e92-8bcc-5129532c4124","Type":"ContainerStarted","Data":"095ff9dbf3cf7837f52e4a1298b620e9010ba7102d9e3612a8305831b985a824"} Feb 19 08:25:29 crc kubenswrapper[5023]: I0219 08:25:29.380995 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.380977949 podStartE2EDuration="2.380977949s" podCreationTimestamp="2026-02-19 08:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:29.37497315 +0000 UTC m=+1487.032092098" watchObservedRunningTime="2026-02-19 08:25:29.380977949 +0000 UTC m=+1487.038096897" Feb 19 08:25:29 crc kubenswrapper[5023]: I0219 08:25:29.487238 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" path="/var/lib/kubelet/pods/42161976-e3ff-4d82-905c-99420be2c7c4/volumes" Feb 19 08:25:29 crc kubenswrapper[5023]: I0219 08:25:29.636666 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:29 crc kubenswrapper[5023]: I0219 08:25:29.669292 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:30 crc kubenswrapper[5023]: I0219 08:25:30.399998 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:25:32 crc kubenswrapper[5023]: I0219 08:25:32.653274 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:32 crc 
kubenswrapper[5023]: I0219 08:25:32.653866 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-kuttl-api-log" containerID="cri-o://c91c3049b8b9e89ce2be9455ae69a5e15d8ef90dc8cb8f8bb53dbcceb03e548f" gracePeriod=30 Feb 19 08:25:32 crc kubenswrapper[5023]: I0219 08:25:32.653901 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-api" containerID="cri-o://4132ae6c3cc44a664223f75afe09203b68e2d8ae3868348f9c52bd1d42cdb001" gracePeriod=30 Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.397298 5023 generic.go:334] "Generic (PLEG): container finished" podID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerID="4132ae6c3cc44a664223f75afe09203b68e2d8ae3868348f9c52bd1d42cdb001" exitCode=0 Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.397708 5023 generic.go:334] "Generic (PLEG): container finished" podID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerID="c91c3049b8b9e89ce2be9455ae69a5e15d8ef90dc8cb8f8bb53dbcceb03e548f" exitCode=143 Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.397366 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5eaa73aa-2ad2-47a2-aa91-61af53abdaff","Type":"ContainerDied","Data":"4132ae6c3cc44a664223f75afe09203b68e2d8ae3868348f9c52bd1d42cdb001"} Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.397845 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5eaa73aa-2ad2-47a2-aa91-61af53abdaff","Type":"ContainerDied","Data":"c91c3049b8b9e89ce2be9455ae69a5e15d8ef90dc8cb8f8bb53dbcceb03e548f"} Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.512350 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.652562 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvw78\" (UniqueName: \"kubernetes.io/projected/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-kube-api-access-wvw78\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.652987 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-public-tls-certs\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653039 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-internal-tls-certs\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653088 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-cert-memcached-mtls\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653141 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-logs\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653205 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-combined-ca-bundle\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653248 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-config-data\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653340 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-custom-prometheus-ca\") pod \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\" (UID: \"5eaa73aa-2ad2-47a2-aa91-61af53abdaff\") " Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653677 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-logs" (OuterVolumeSpecName: "logs") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.653847 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.674233 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-kube-api-access-wvw78" (OuterVolumeSpecName: "kube-api-access-wvw78") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "kube-api-access-wvw78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.677205 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.685700 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.704975 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.708310 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.709704 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-config-data" (OuterVolumeSpecName: "config-data") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.725548 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5eaa73aa-2ad2-47a2-aa91-61af53abdaff" (UID: "5eaa73aa-2ad2-47a2-aa91-61af53abdaff"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.755596 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvw78\" (UniqueName: \"kubernetes.io/projected/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-kube-api-access-wvw78\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.755647 5023 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.755657 5023 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.755668 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-cert-memcached-mtls\") on 
node \"crc\" DevicePath \"\"" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.755677 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.755686 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:33 crc kubenswrapper[5023]: I0219 08:25:33.755695 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5eaa73aa-2ad2-47a2-aa91-61af53abdaff-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.408107 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5eaa73aa-2ad2-47a2-aa91-61af53abdaff","Type":"ContainerDied","Data":"835062cf54a355d9e26ee3f85e89af5bcef854b4d69b04f7308315af53820cf2"} Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.408171 5023 scope.go:117] "RemoveContainer" containerID="4132ae6c3cc44a664223f75afe09203b68e2d8ae3868348f9c52bd1d42cdb001" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.408170 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.435783 5023 scope.go:117] "RemoveContainer" containerID="c91c3049b8b9e89ce2be9455ae69a5e15d8ef90dc8cb8f8bb53dbcceb03e548f" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.448105 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.460231 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478244 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:34 crc kubenswrapper[5023]: E0219 08:25:34.478651 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="extract-utilities" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478670 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="extract-utilities" Feb 19 08:25:34 crc kubenswrapper[5023]: E0219 08:25:34.478707 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="registry-server" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478715 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="registry-server" Feb 19 08:25:34 crc kubenswrapper[5023]: E0219 08:25:34.478731 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-kuttl-api-log" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478738 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-kuttl-api-log" Feb 19 08:25:34 crc kubenswrapper[5023]: 
E0219 08:25:34.478750 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-api" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478758 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-api" Feb 19 08:25:34 crc kubenswrapper[5023]: E0219 08:25:34.478774 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="extract-content" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478781 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="extract-content" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478961 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-kuttl-api-log" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478979 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="42161976-e3ff-4d82-905c-99420be2c7c4" containerName="registry-server" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.478998 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" containerName="watcher-api" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.480036 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.482376 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.499960 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.568714 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.568788 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.568837 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.568861 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.568879 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhmn\" (UniqueName: \"kubernetes.io/projected/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-kube-api-access-hwhmn\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.569063 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.669880 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.669929 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.669976 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.670021 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.670051 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.670069 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhmn\" (UniqueName: \"kubernetes.io/projected/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-kube-api-access-hwhmn\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.670570 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.675298 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 
08:25:34.675884 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.676597 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.677868 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.687842 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhmn\" (UniqueName: \"kubernetes.io/projected/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-kube-api-access-hwhmn\") pod \"watcher-kuttl-api-0\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:34 crc kubenswrapper[5023]: I0219 08:25:34.793331 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:35 crc kubenswrapper[5023]: I0219 08:25:35.208165 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:25:35 crc kubenswrapper[5023]: I0219 08:25:35.322996 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:35 crc kubenswrapper[5023]: I0219 08:25:35.363816 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:25:35 crc kubenswrapper[5023]: I0219 08:25:35.378206 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:35 crc kubenswrapper[5023]: I0219 08:25:35.428021 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e0f82a63-79a3-4fe0-b51b-41c32a781fa9","Type":"ContainerStarted","Data":"e8565794bbba9da50f4e9c3b6ca39929ecd5c26f0de6db535e63510489d53855"} Feb 19 08:25:35 crc kubenswrapper[5023]: I0219 08:25:35.496532 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eaa73aa-2ad2-47a2-aa91-61af53abdaff" path="/var/lib/kubelet/pods/5eaa73aa-2ad2-47a2-aa91-61af53abdaff/volumes" Feb 19 08:25:36 crc kubenswrapper[5023]: I0219 08:25:36.437171 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e0f82a63-79a3-4fe0-b51b-41c32a781fa9","Type":"ContainerStarted","Data":"762bfcbd96fceeca62c8e67fd2e49a095907c2fe047d9a32a80a015e95fbd257"} Feb 19 08:25:36 crc kubenswrapper[5023]: I0219 08:25:36.437224 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e0f82a63-79a3-4fe0-b51b-41c32a781fa9","Type":"ContainerStarted","Data":"573c9b7e4b58ea38ac9e8bfaabe3adc5150d2cc852001b66b847da7fcabc7939"} 
Feb 19 08:25:36 crc kubenswrapper[5023]: I0219 08:25:36.437548 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:37 crc kubenswrapper[5023]: I0219 08:25:37.724117 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:37 crc kubenswrapper[5023]: I0219 08:25:37.751046 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:37 crc kubenswrapper[5023]: I0219 08:25:37.776682 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.776656199 podStartE2EDuration="3.776656199s" podCreationTimestamp="2026-02-19 08:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:25:36.457163064 +0000 UTC m=+1494.114282022" watchObservedRunningTime="2026-02-19 08:25:37.776656199 +0000 UTC m=+1495.433775187" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.320808 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8kdl"] Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.321049 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k8kdl" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="registry-server" containerID="cri-o://73a63518502ca6d01df21dcdac84013cfcf1ba05dd61f42d97347159badf0567" gracePeriod=2 Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.456754 5023 generic.go:334] "Generic (PLEG): container finished" podID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerID="73a63518502ca6d01df21dcdac84013cfcf1ba05dd61f42d97347159badf0567" exitCode=0 Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 
08:25:38.457312 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8kdl" event={"ID":"dd598331-2c6c-4568-91e4-f8ee7e01fe3b","Type":"ContainerDied","Data":"73a63518502ca6d01df21dcdac84013cfcf1ba05dd61f42d97347159badf0567"} Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.457375 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.497582 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.814853 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.818658 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.838075 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69m97\" (UniqueName: \"kubernetes.io/projected/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-kube-api-access-69m97\") pod \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.838240 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-catalog-content\") pod \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.838311 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-utilities\") pod \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\" (UID: \"dd598331-2c6c-4568-91e4-f8ee7e01fe3b\") " Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.839718 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-utilities" (OuterVolumeSpecName: "utilities") pod "dd598331-2c6c-4568-91e4-f8ee7e01fe3b" (UID: "dd598331-2c6c-4568-91e4-f8ee7e01fe3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.845546 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-kube-api-access-69m97" (OuterVolumeSpecName: "kube-api-access-69m97") pod "dd598331-2c6c-4568-91e4-f8ee7e01fe3b" (UID: "dd598331-2c6c-4568-91e4-f8ee7e01fe3b"). InnerVolumeSpecName "kube-api-access-69m97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.945689 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.945727 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69m97\" (UniqueName: \"kubernetes.io/projected/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-kube-api-access-69m97\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:38 crc kubenswrapper[5023]: I0219 08:25:38.978401 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd598331-2c6c-4568-91e4-f8ee7e01fe3b" (UID: "dd598331-2c6c-4568-91e4-f8ee7e01fe3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.047521 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd598331-2c6c-4568-91e4-f8ee7e01fe3b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.467696 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8kdl" event={"ID":"dd598331-2c6c-4568-91e4-f8ee7e01fe3b","Type":"ContainerDied","Data":"c9432bd19ead786b6a66e32b444ab9730cd2e7b3fe4deb496bd515196f7f9ae3"} Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.467777 5023 scope.go:117] "RemoveContainer" containerID="73a63518502ca6d01df21dcdac84013cfcf1ba05dd61f42d97347159badf0567" Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.468266 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8kdl" Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.496429 5023 scope.go:117] "RemoveContainer" containerID="506b7898e91bdfaf488d4cb140b05676657e104e7e163087f52d9f8b6c0a2690" Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.518216 5023 scope.go:117] "RemoveContainer" containerID="d33ea2172fc6cd5cf84cce5c85f9f20b883b845b7385486a3e1f12f60c397801" Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.579540 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8kdl"] Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.590770 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k8kdl"] Feb 19 08:25:39 crc kubenswrapper[5023]: I0219 08:25:39.794341 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:41 crc kubenswrapper[5023]: I0219 08:25:41.499054 5023 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" path="/var/lib/kubelet/pods/dd598331-2c6c-4568-91e4-f8ee7e01fe3b/volumes" Feb 19 08:25:44 crc kubenswrapper[5023]: I0219 08:25:44.794999 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:44 crc kubenswrapper[5023]: I0219 08:25:44.801401 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:45 crc kubenswrapper[5023]: I0219 08:25:45.549366 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.334311 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjjtf"] Feb 19 08:25:55 crc kubenswrapper[5023]: E0219 08:25:55.335177 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="registry-server" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.335193 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="registry-server" Feb 19 08:25:55 crc kubenswrapper[5023]: E0219 08:25:55.335212 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="extract-utilities" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.335221 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="extract-utilities" Feb 19 08:25:55 crc kubenswrapper[5023]: E0219 08:25:55.335241 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="extract-content" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.335249 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="extract-content" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.335440 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd598331-2c6c-4568-91e4-f8ee7e01fe3b" containerName="registry-server" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.336545 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.349653 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjjtf"] Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.448815 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-catalog-content\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.449137 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-utilities\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.449224 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkn4v\" (UniqueName: \"kubernetes.io/projected/87b86935-f98c-4199-bf4a-3d2609e690a7-kube-api-access-lkn4v\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.550934 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-catalog-content\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.551247 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-utilities\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.551345 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkn4v\" (UniqueName: \"kubernetes.io/projected/87b86935-f98c-4199-bf4a-3d2609e690a7-kube-api-access-lkn4v\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.551783 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-utilities\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.551788 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-catalog-content\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.574265 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lkn4v\" (UniqueName: \"kubernetes.io/projected/87b86935-f98c-4199-bf4a-3d2609e690a7-kube-api-access-lkn4v\") pod \"community-operators-gjjtf\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:55 crc kubenswrapper[5023]: I0219 08:25:55.678220 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:25:56 crc kubenswrapper[5023]: I0219 08:25:56.067116 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjjtf"] Feb 19 08:25:56 crc kubenswrapper[5023]: I0219 08:25:56.635066 5023 generic.go:334] "Generic (PLEG): container finished" podID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerID="3fd8d5b22ab9140ea5e450b59737e28d2f6e623506bd828f0179da0483a0a4c9" exitCode=0 Feb 19 08:25:56 crc kubenswrapper[5023]: I0219 08:25:56.635120 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjjtf" event={"ID":"87b86935-f98c-4199-bf4a-3d2609e690a7","Type":"ContainerDied","Data":"3fd8d5b22ab9140ea5e450b59737e28d2f6e623506bd828f0179da0483a0a4c9"} Feb 19 08:25:56 crc kubenswrapper[5023]: I0219 08:25:56.635212 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjjtf" event={"ID":"87b86935-f98c-4199-bf4a-3d2609e690a7","Type":"ContainerStarted","Data":"5ebc9bd6f90185ab0d133116c8280d22d84b712456c8ece67476028d6b6d051a"} Feb 19 08:25:57 crc kubenswrapper[5023]: I0219 08:25:57.645279 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjjtf" event={"ID":"87b86935-f98c-4199-bf4a-3d2609e690a7","Type":"ContainerStarted","Data":"a385fea7c16dd79c3a654ba2022769d4204c3981472977435e6d2121986c7fbb"} Feb 19 08:25:58 crc kubenswrapper[5023]: I0219 08:25:58.569613 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/keystone-747f4cf75-wlbr2" Feb 19 08:25:58 crc kubenswrapper[5023]: I0219 08:25:58.636258 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-6cc7b947df-92tm2"] Feb 19 08:25:58 crc kubenswrapper[5023]: I0219 08:25:58.636630 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" podUID="3a18ac94-c6b6-40f7-bf4f-907dad15e61b" containerName="keystone-api" containerID="cri-o://5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010" gracePeriod=30 Feb 19 08:25:58 crc kubenswrapper[5023]: I0219 08:25:58.665287 5023 generic.go:334] "Generic (PLEG): container finished" podID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerID="a385fea7c16dd79c3a654ba2022769d4204c3981472977435e6d2121986c7fbb" exitCode=0 Feb 19 08:25:58 crc kubenswrapper[5023]: I0219 08:25:58.665336 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjjtf" event={"ID":"87b86935-f98c-4199-bf4a-3d2609e690a7","Type":"ContainerDied","Data":"a385fea7c16dd79c3a654ba2022769d4204c3981472977435e6d2121986c7fbb"} Feb 19 08:25:59 crc kubenswrapper[5023]: I0219 08:25:59.678203 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjjtf" event={"ID":"87b86935-f98c-4199-bf4a-3d2609e690a7","Type":"ContainerStarted","Data":"bc80b7f3ea9fcb81381d8ce0c73bbe6889da42478e642c15b0a36799d4fe94c0"} Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.196441 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.229774 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjjtf" podStartSLOduration=4.763709912 podStartE2EDuration="7.229753279s" podCreationTimestamp="2026-02-19 08:25:55 +0000 UTC" firstStartedPulling="2026-02-19 08:25:56.636672534 +0000 UTC m=+1514.293791482" lastFinishedPulling="2026-02-19 08:25:59.102715901 +0000 UTC m=+1516.759834849" observedRunningTime="2026-02-19 08:25:59.716842457 +0000 UTC m=+1517.373961415" watchObservedRunningTime="2026-02-19 08:26:02.229753279 +0000 UTC m=+1519.886872247" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.358674 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-internal-tls-certs\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.358767 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-fernet-keys\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.358826 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-config-data\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.358885 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-scripts\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.359791 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5st8\" (UniqueName: \"kubernetes.io/projected/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-kube-api-access-d5st8\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.359938 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-credential-keys\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.360028 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-combined-ca-bundle\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.360082 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-public-tls-certs\") pod \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\" (UID: \"3a18ac94-c6b6-40f7-bf4f-907dad15e61b\") " Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.365616 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.365953 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.367573 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-kube-api-access-d5st8" (OuterVolumeSpecName: "kube-api-access-d5st8") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "kube-api-access-d5st8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.368297 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-scripts" (OuterVolumeSpecName: "scripts") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.387159 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-config-data" (OuterVolumeSpecName: "config-data") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.405373 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.407188 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.412232 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a18ac94-c6b6-40f7-bf4f-907dad15e61b" (UID: "3a18ac94-c6b6-40f7-bf4f-907dad15e61b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.461937 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5st8\" (UniqueName: \"kubernetes.io/projected/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-kube-api-access-d5st8\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.461997 5023 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.462009 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.462017 5023 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.462027 5023 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.462036 5023 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.462046 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.462053 
5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a18ac94-c6b6-40f7-bf4f-907dad15e61b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.705449 5023 generic.go:334] "Generic (PLEG): container finished" podID="3a18ac94-c6b6-40f7-bf4f-907dad15e61b" containerID="5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010" exitCode=0 Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.705507 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" event={"ID":"3a18ac94-c6b6-40f7-bf4f-907dad15e61b","Type":"ContainerDied","Data":"5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010"} Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.705571 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" event={"ID":"3a18ac94-c6b6-40f7-bf4f-907dad15e61b","Type":"ContainerDied","Data":"9ee4f80e640355fae47439b79ef71e205fca29debb168a5cef741ae1a92ed731"} Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.705600 5023 scope.go:117] "RemoveContainer" containerID="5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.706476 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-6cc7b947df-92tm2" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.734732 5023 scope.go:117] "RemoveContainer" containerID="5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010" Feb 19 08:26:02 crc kubenswrapper[5023]: E0219 08:26:02.735199 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010\": container with ID starting with 5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010 not found: ID does not exist" containerID="5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.735275 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010"} err="failed to get container status \"5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010\": rpc error: code = NotFound desc = could not find container \"5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010\": container with ID starting with 5c594ddf5ff1f9f35e884cf396e3a657ca8449bdacc8e055c2f24f74f4727010 not found: ID does not exist" Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.757967 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-6cc7b947df-92tm2"] Feb 19 08:26:02 crc kubenswrapper[5023]: I0219 08:26:02.768443 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-6cc7b947df-92tm2"] Feb 19 08:26:03 crc kubenswrapper[5023]: I0219 08:26:03.488399 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a18ac94-c6b6-40f7-bf4f-907dad15e61b" path="/var/lib/kubelet/pods/3a18ac94-c6b6-40f7-bf4f-907dad15e61b/volumes" Feb 19 08:26:05 crc kubenswrapper[5023]: I0219 08:26:05.679102 5023 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:26:05 crc kubenswrapper[5023]: I0219 08:26:05.680645 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:26:05 crc kubenswrapper[5023]: I0219 08:26:05.740465 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.405096 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-snsvb"] Feb 19 08:26:06 crc kubenswrapper[5023]: E0219 08:26:06.405842 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a18ac94-c6b6-40f7-bf4f-907dad15e61b" containerName="keystone-api" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.405867 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a18ac94-c6b6-40f7-bf4f-907dad15e61b" containerName="keystone-api" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.406313 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a18ac94-c6b6-40f7-bf4f-907dad15e61b" containerName="keystone-api" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.409174 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.415474 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snsvb"] Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.531510 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-utilities\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.531855 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77r9n\" (UniqueName: \"kubernetes.io/projected/c4913879-18e7-425d-81c2-4b80fad9f0e9-kube-api-access-77r9n\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.531896 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-catalog-content\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.633686 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-catalog-content\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.633859 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-utilities\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.633941 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77r9n\" (UniqueName: \"kubernetes.io/projected/c4913879-18e7-425d-81c2-4b80fad9f0e9-kube-api-access-77r9n\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.634253 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-catalog-content\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.634322 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-utilities\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.660874 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77r9n\" (UniqueName: \"kubernetes.io/projected/c4913879-18e7-425d-81c2-4b80fad9f0e9-kube-api-access-77r9n\") pod \"certified-operators-snsvb\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.733268 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.807183 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.881709 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.882514 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-central-agent" containerID="cri-o://a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a" gracePeriod=30 Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.882600 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="proxy-httpd" containerID="cri-o://e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b" gracePeriod=30 Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.882666 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-notification-agent" containerID="cri-o://69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53" gracePeriod=30 Feb 19 08:26:06 crc kubenswrapper[5023]: I0219 08:26:06.882652 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="sg-core" containerID="cri-o://0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5" gracePeriod=30 Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.086684 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-snsvb"] Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.760065 5023 generic.go:334] "Generic (PLEG): container finished" podID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerID="b9f650bf481a968df8dbf01065d9a0e1d0d2211d2b520ca48766706ebb2f4bc3" exitCode=0 Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.760170 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snsvb" event={"ID":"c4913879-18e7-425d-81c2-4b80fad9f0e9","Type":"ContainerDied","Data":"b9f650bf481a968df8dbf01065d9a0e1d0d2211d2b520ca48766706ebb2f4bc3"} Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.761905 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snsvb" event={"ID":"c4913879-18e7-425d-81c2-4b80fad9f0e9","Type":"ContainerStarted","Data":"c0befb21fee2546a7025f400c2dfabfb7502bf929d951d860599da9d274a1093"} Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.768821 5023 generic.go:334] "Generic (PLEG): container finished" podID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerID="e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b" exitCode=0 Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.768860 5023 generic.go:334] "Generic (PLEG): container finished" podID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerID="0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5" exitCode=2 Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.768869 5023 generic.go:334] "Generic (PLEG): container finished" podID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerID="a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a" exitCode=0 Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.768929 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerDied","Data":"e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b"} Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.769000 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerDied","Data":"0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5"} Feb 19 08:26:07 crc kubenswrapper[5023]: I0219 08:26:07.769013 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerDied","Data":"a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a"} Feb 19 08:26:08 crc kubenswrapper[5023]: I0219 08:26:08.778840 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snsvb" event={"ID":"c4913879-18e7-425d-81c2-4b80fad9f0e9","Type":"ContainerStarted","Data":"b633e039168a20394e5eca41aa75709e087f87e570029391c800297d03fc37cd"} Feb 19 08:26:09 crc kubenswrapper[5023]: I0219 08:26:09.790115 5023 generic.go:334] "Generic (PLEG): container finished" podID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerID="b633e039168a20394e5eca41aa75709e087f87e570029391c800297d03fc37cd" exitCode=0 Feb 19 08:26:09 crc kubenswrapper[5023]: I0219 08:26:09.790326 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snsvb" event={"ID":"c4913879-18e7-425d-81c2-4b80fad9f0e9","Type":"ContainerDied","Data":"b633e039168a20394e5eca41aa75709e087f87e570029391c800297d03fc37cd"} Feb 19 08:26:10 crc kubenswrapper[5023]: I0219 08:26:10.531424 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjjtf"] Feb 19 08:26:10 crc kubenswrapper[5023]: I0219 08:26:10.532078 5023 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-gjjtf" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="registry-server" containerID="cri-o://bc80b7f3ea9fcb81381d8ce0c73bbe6889da42478e642c15b0a36799d4fe94c0" gracePeriod=2 Feb 19 08:26:10 crc kubenswrapper[5023]: I0219 08:26:10.819008 5023 generic.go:334] "Generic (PLEG): container finished" podID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerID="bc80b7f3ea9fcb81381d8ce0c73bbe6889da42478e642c15b0a36799d4fe94c0" exitCode=0 Feb 19 08:26:10 crc kubenswrapper[5023]: I0219 08:26:10.819094 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjjtf" event={"ID":"87b86935-f98c-4199-bf4a-3d2609e690a7","Type":"ContainerDied","Data":"bc80b7f3ea9fcb81381d8ce0c73bbe6889da42478e642c15b0a36799d4fe94c0"} Feb 19 08:26:10 crc kubenswrapper[5023]: I0219 08:26:10.826462 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snsvb" event={"ID":"c4913879-18e7-425d-81c2-4b80fad9f0e9","Type":"ContainerStarted","Data":"b89cf6e2af3a133b16697f81af6ae6acc2ca74c58da79dbdd49c3548bc809284"} Feb 19 08:26:10 crc kubenswrapper[5023]: I0219 08:26:10.852797 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-snsvb" podStartSLOduration=2.425329661 podStartE2EDuration="4.852774334s" podCreationTimestamp="2026-02-19 08:26:06 +0000 UTC" firstStartedPulling="2026-02-19 08:26:07.761340681 +0000 UTC m=+1525.418459629" lastFinishedPulling="2026-02-19 08:26:10.188785354 +0000 UTC m=+1527.845904302" observedRunningTime="2026-02-19 08:26:10.844759932 +0000 UTC m=+1528.501878900" watchObservedRunningTime="2026-02-19 08:26:10.852774334 +0000 UTC m=+1528.509893282" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.073958 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.210605 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-utilities\") pod \"87b86935-f98c-4199-bf4a-3d2609e690a7\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.211318 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkn4v\" (UniqueName: \"kubernetes.io/projected/87b86935-f98c-4199-bf4a-3d2609e690a7-kube-api-access-lkn4v\") pod \"87b86935-f98c-4199-bf4a-3d2609e690a7\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.211396 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-catalog-content\") pod \"87b86935-f98c-4199-bf4a-3d2609e690a7\" (UID: \"87b86935-f98c-4199-bf4a-3d2609e690a7\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.211479 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-utilities" (OuterVolumeSpecName: "utilities") pod "87b86935-f98c-4199-bf4a-3d2609e690a7" (UID: "87b86935-f98c-4199-bf4a-3d2609e690a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.212358 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.228297 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b86935-f98c-4199-bf4a-3d2609e690a7-kube-api-access-lkn4v" (OuterVolumeSpecName: "kube-api-access-lkn4v") pod "87b86935-f98c-4199-bf4a-3d2609e690a7" (UID: "87b86935-f98c-4199-bf4a-3d2609e690a7"). InnerVolumeSpecName "kube-api-access-lkn4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.267702 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87b86935-f98c-4199-bf4a-3d2609e690a7" (UID: "87b86935-f98c-4199-bf4a-3d2609e690a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.313859 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b86935-f98c-4199-bf4a-3d2609e690a7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.313901 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkn4v\" (UniqueName: \"kubernetes.io/projected/87b86935-f98c-4199-bf4a-3d2609e690a7-kube-api-access-lkn4v\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.573511 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721676 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-run-httpd\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721741 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-sg-core-conf-yaml\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721773 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-ceilometer-tls-certs\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721803 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jhhj\" (UniqueName: \"kubernetes.io/projected/05966302-ca1a-4ac5-a1a3-fa36220e8452-kube-api-access-7jhhj\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721839 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-config-data\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721860 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-scripts\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721904 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-log-httpd\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.721993 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-combined-ca-bundle\") pod \"05966302-ca1a-4ac5-a1a3-fa36220e8452\" (UID: \"05966302-ca1a-4ac5-a1a3-fa36220e8452\") " Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.722438 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.722553 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.722843 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.722865 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05966302-ca1a-4ac5-a1a3-fa36220e8452-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.741253 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-scripts" (OuterVolumeSpecName: "scripts") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.741403 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05966302-ca1a-4ac5-a1a3-fa36220e8452-kube-api-access-7jhhj" (OuterVolumeSpecName: "kube-api-access-7jhhj") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "kube-api-access-7jhhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.748247 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.802179 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.816761 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.825027 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.825074 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.825091 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jhhj\" (UniqueName: \"kubernetes.io/projected/05966302-ca1a-4ac5-a1a3-fa36220e8452-kube-api-access-7jhhj\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.825107 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.825121 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.837398 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-config-data" (OuterVolumeSpecName: "config-data") pod "05966302-ca1a-4ac5-a1a3-fa36220e8452" (UID: "05966302-ca1a-4ac5-a1a3-fa36220e8452"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.837540 5023 generic.go:334] "Generic (PLEG): container finished" podID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerID="69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53" exitCode=0 Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.837585 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerDied","Data":"69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53"} Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.837649 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.837667 5023 scope.go:117] "RemoveContainer" containerID="e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.837653 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"05966302-ca1a-4ac5-a1a3-fa36220e8452","Type":"ContainerDied","Data":"9ba1a4c0efe2e920ed38227ba938d02023c4ba6fff775b48a17721c77f7d14ff"} Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.841063 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjjtf" event={"ID":"87b86935-f98c-4199-bf4a-3d2609e690a7","Type":"ContainerDied","Data":"5ebc9bd6f90185ab0d133116c8280d22d84b712456c8ece67476028d6b6d051a"} Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.841145 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjjtf" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.862461 5023 scope.go:117] "RemoveContainer" containerID="0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.872868 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjjtf"] Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.881420 5023 scope.go:117] "RemoveContainer" containerID="69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.882122 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjjtf"] Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.893406 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.900266 
5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.902794 5023 scope.go:117] "RemoveContainer" containerID="a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.923080 5023 scope.go:117] "RemoveContainer" containerID="e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.923528 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b\": container with ID starting with e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b not found: ID does not exist" containerID="e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.923555 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b"} err="failed to get container status \"e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b\": rpc error: code = NotFound desc = could not find container \"e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b\": container with ID starting with e131a9f096fa7eb4ddf5bd0a84e89e83334b78395a60474febb893c6d050484b not found: ID does not exist" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.923576 5023 scope.go:117] "RemoveContainer" containerID="0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.923965 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5\": container with ID starting with 
0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5 not found: ID does not exist" containerID="0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.923985 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5"} err="failed to get container status \"0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5\": rpc error: code = NotFound desc = could not find container \"0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5\": container with ID starting with 0dd7d10c12ebb23b0e733064a4337ae3d92ab0cab97e181554f0be854e475fc5 not found: ID does not exist" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.923997 5023 scope.go:117] "RemoveContainer" containerID="69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.924219 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53\": container with ID starting with 69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53 not found: ID does not exist" containerID="69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.924242 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53"} err="failed to get container status \"69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53\": rpc error: code = NotFound desc = could not find container \"69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53\": container with ID starting with 69323d344e8c17a9d1ac90c231528fd515a0c12332eb664553cac0d2ae427f53 not found: ID does not 
exist" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.924261 5023 scope.go:117] "RemoveContainer" containerID="a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.924476 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a\": container with ID starting with a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a not found: ID does not exist" containerID="a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.924498 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a"} err="failed to get container status \"a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a\": rpc error: code = NotFound desc = could not find container \"a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a\": container with ID starting with a260dd15af8f2debf2f85579d1bc2c15cf448d7ffa8c5024430826fbc3c60d5a not found: ID does not exist" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.924514 5023 scope.go:117] "RemoveContainer" containerID="bc80b7f3ea9fcb81381d8ce0c73bbe6889da42478e642c15b0a36799d4fe94c0" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.926046 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05966302-ca1a-4ac5-a1a3-fa36220e8452-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.943456 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.945174 5023 scope.go:117] "RemoveContainer" 
containerID="a385fea7c16dd79c3a654ba2022769d4204c3981472977435e6d2121986c7fbb" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.946415 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-notification-agent" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946437 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-notification-agent" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.946452 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="extract-utilities" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946461 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="extract-utilities" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.946472 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="extract-content" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946480 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="extract-content" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.946495 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="registry-server" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946500 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="registry-server" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.946521 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="sg-core" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946529 5023 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="sg-core" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.946548 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="proxy-httpd" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946773 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="proxy-httpd" Feb 19 08:26:11 crc kubenswrapper[5023]: E0219 08:26:11.946795 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-central-agent" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946805 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-central-agent" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946956 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-notification-agent" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946981 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="proxy-httpd" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946989 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="sg-core" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.946999 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" containerName="registry-server" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.947012 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" containerName="ceilometer-central-agent" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.948817 5023 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.951097 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.952206 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.952329 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.959809 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:11 crc kubenswrapper[5023]: I0219 08:26:11.965959 5023 scope.go:117] "RemoveContainer" containerID="3fd8d5b22ab9140ea5e450b59737e28d2f6e623506bd828f0179da0483a0a4c9" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028487 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028529 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-scripts\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028554 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8sd\" (UniqueName: 
\"kubernetes.io/projected/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-kube-api-access-hb8sd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028588 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028659 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028686 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-config-data\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028712 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.028741 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130651 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb8sd\" (UniqueName: \"kubernetes.io/projected/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-kube-api-access-hb8sd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130717 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130775 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130796 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-config-data\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130819 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130840 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130893 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.130910 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-scripts\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.131298 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.131379 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.135757 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-scripts\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.135989 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.136244 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.136519 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.140328 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-config-data\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.150444 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb8sd\" (UniqueName: \"kubernetes.io/projected/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-kube-api-access-hb8sd\") pod \"ceilometer-0\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.267863 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.724221 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.742633 5023 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:26:12 crc kubenswrapper[5023]: I0219 08:26:12.854476 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerStarted","Data":"fd936690abbfe2ce12d6a56ce426730d92c7218fb52760e84c0405d71f958d97"} Feb 19 08:26:13 crc kubenswrapper[5023]: I0219 08:26:13.485089 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05966302-ca1a-4ac5-a1a3-fa36220e8452" path="/var/lib/kubelet/pods/05966302-ca1a-4ac5-a1a3-fa36220e8452/volumes" Feb 19 08:26:13 crc kubenswrapper[5023]: I0219 08:26:13.486326 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b86935-f98c-4199-bf4a-3d2609e690a7" path="/var/lib/kubelet/pods/87b86935-f98c-4199-bf4a-3d2609e690a7/volumes" Feb 19 08:26:13 crc kubenswrapper[5023]: I0219 08:26:13.875854 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerStarted","Data":"771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f"} Feb 19 08:26:14 crc kubenswrapper[5023]: I0219 08:26:14.886395 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerStarted","Data":"a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe"} Feb 19 08:26:14 crc 
kubenswrapper[5023]: I0219 08:26:14.886791 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerStarted","Data":"95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f"} Feb 19 08:26:16 crc kubenswrapper[5023]: I0219 08:26:16.733908 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:16 crc kubenswrapper[5023]: I0219 08:26:16.734332 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:16 crc kubenswrapper[5023]: I0219 08:26:16.781463 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:16 crc kubenswrapper[5023]: I0219 08:26:16.905727 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerStarted","Data":"def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95"} Feb 19 08:26:16 crc kubenswrapper[5023]: I0219 08:26:16.934103 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.337003134 podStartE2EDuration="5.934083773s" podCreationTimestamp="2026-02-19 08:26:11 +0000 UTC" firstStartedPulling="2026-02-19 08:26:12.742424867 +0000 UTC m=+1530.399543815" lastFinishedPulling="2026-02-19 08:26:16.339505506 +0000 UTC m=+1533.996624454" observedRunningTime="2026-02-19 08:26:16.932545062 +0000 UTC m=+1534.589664020" watchObservedRunningTime="2026-02-19 08:26:16.934083773 +0000 UTC m=+1534.591202721" Feb 19 08:26:16 crc kubenswrapper[5023]: I0219 08:26:16.957756 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 
08:26:17 crc kubenswrapper[5023]: I0219 08:26:17.913004 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:19 crc kubenswrapper[5023]: I0219 08:26:19.130057 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snsvb"] Feb 19 08:26:19 crc kubenswrapper[5023]: I0219 08:26:19.130437 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-snsvb" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="registry-server" containerID="cri-o://b89cf6e2af3a133b16697f81af6ae6acc2ca74c58da79dbdd49c3548bc809284" gracePeriod=2 Feb 19 08:26:19 crc kubenswrapper[5023]: I0219 08:26:19.935089 5023 generic.go:334] "Generic (PLEG): container finished" podID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerID="b89cf6e2af3a133b16697f81af6ae6acc2ca74c58da79dbdd49c3548bc809284" exitCode=0 Feb 19 08:26:19 crc kubenswrapper[5023]: I0219 08:26:19.935547 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snsvb" event={"ID":"c4913879-18e7-425d-81c2-4b80fad9f0e9","Type":"ContainerDied","Data":"b89cf6e2af3a133b16697f81af6ae6acc2ca74c58da79dbdd49c3548bc809284"} Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.190943 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.281703 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77r9n\" (UniqueName: \"kubernetes.io/projected/c4913879-18e7-425d-81c2-4b80fad9f0e9-kube-api-access-77r9n\") pod \"c4913879-18e7-425d-81c2-4b80fad9f0e9\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.281838 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-utilities\") pod \"c4913879-18e7-425d-81c2-4b80fad9f0e9\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.281989 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-catalog-content\") pod \"c4913879-18e7-425d-81c2-4b80fad9f0e9\" (UID: \"c4913879-18e7-425d-81c2-4b80fad9f0e9\") " Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.282799 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-utilities" (OuterVolumeSpecName: "utilities") pod "c4913879-18e7-425d-81c2-4b80fad9f0e9" (UID: "c4913879-18e7-425d-81c2-4b80fad9f0e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.288303 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4913879-18e7-425d-81c2-4b80fad9f0e9-kube-api-access-77r9n" (OuterVolumeSpecName: "kube-api-access-77r9n") pod "c4913879-18e7-425d-81c2-4b80fad9f0e9" (UID: "c4913879-18e7-425d-81c2-4b80fad9f0e9"). InnerVolumeSpecName "kube-api-access-77r9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.331984 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4913879-18e7-425d-81c2-4b80fad9f0e9" (UID: "c4913879-18e7-425d-81c2-4b80fad9f0e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.383821 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.383861 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4913879-18e7-425d-81c2-4b80fad9f0e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.383875 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77r9n\" (UniqueName: \"kubernetes.io/projected/c4913879-18e7-425d-81c2-4b80fad9f0e9-kube-api-access-77r9n\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.945869 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snsvb" event={"ID":"c4913879-18e7-425d-81c2-4b80fad9f0e9","Type":"ContainerDied","Data":"c0befb21fee2546a7025f400c2dfabfb7502bf929d951d860599da9d274a1093"} Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.945920 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snsvb" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.946227 5023 scope.go:117] "RemoveContainer" containerID="b89cf6e2af3a133b16697f81af6ae6acc2ca74c58da79dbdd49c3548bc809284" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.980128 5023 scope.go:117] "RemoveContainer" containerID="b633e039168a20394e5eca41aa75709e087f87e570029391c800297d03fc37cd" Feb 19 08:26:20 crc kubenswrapper[5023]: I0219 08:26:20.997026 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snsvb"] Feb 19 08:26:21 crc kubenswrapper[5023]: I0219 08:26:21.005686 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-snsvb"] Feb 19 08:26:21 crc kubenswrapper[5023]: I0219 08:26:21.010851 5023 scope.go:117] "RemoveContainer" containerID="b9f650bf481a968df8dbf01065d9a0e1d0d2211d2b520ca48766706ebb2f4bc3" Feb 19 08:26:21 crc kubenswrapper[5023]: I0219 08:26:21.487378 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" path="/var/lib/kubelet/pods/c4913879-18e7-425d-81c2-4b80fad9f0e9/volumes" Feb 19 08:26:29 crc kubenswrapper[5023]: I0219 08:26:29.017432 5023 scope.go:117] "RemoveContainer" containerID="e2090a2fe9cd695f306d0ca2b7f4ac7fcb7f16d4927f7809a6eb669cde1890ea" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.163154 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr"] Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.169836 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pqbrr"] Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.191308 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherb45c-account-delete-jdhgk"] Feb 19 08:26:40 crc kubenswrapper[5023]: E0219 08:26:40.191773 5023 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="extract-content" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.191792 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="extract-content" Feb 19 08:26:40 crc kubenswrapper[5023]: E0219 08:26:40.191830 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="extract-utilities" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.191838 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="extract-utilities" Feb 19 08:26:40 crc kubenswrapper[5023]: E0219 08:26:40.191846 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="registry-server" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.191854 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="registry-server" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.192045 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4913879-18e7-425d-81c2-4b80fad9f0e9" containerName="registry-server" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.192819 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.202938 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherb45c-account-delete-jdhgk"] Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.258725 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.259324 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="742301d1-d2cb-4e92-8bcc-5129532c4124" containerName="watcher-decision-engine" containerID="cri-o://095ff9dbf3cf7837f52e4a1298b620e9010ba7102d9e3612a8305831b985a824" gracePeriod=30 Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.273102 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.273317 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b94b892f-04c3-42ab-867e-65d9f5ffa0b1" containerName="watcher-applier" containerID="cri-o://89e09ec5bcc66319d9a90acafe989710b320a0455fbbd0e706baaa885e977f49" gracePeriod=30 Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.297151 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xpd\" (UniqueName: \"kubernetes.io/projected/db665e18-1785-4fa2-8477-c1710eac0146-kube-api-access-98xpd\") pod \"watcherb45c-account-delete-jdhgk\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.297413 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/db665e18-1785-4fa2-8477-c1710eac0146-operator-scripts\") pod \"watcherb45c-account-delete-jdhgk\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.329577 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.330580 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-api" containerID="cri-o://573c9b7e4b58ea38ac9e8bfaabe3adc5150d2cc852001b66b847da7fcabc7939" gracePeriod=30 Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.330957 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-kuttl-api-log" containerID="cri-o://762bfcbd96fceeca62c8e67fd2e49a095907c2fe047d9a32a80a015e95fbd257" gracePeriod=30 Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.399403 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db665e18-1785-4fa2-8477-c1710eac0146-operator-scripts\") pod \"watcherb45c-account-delete-jdhgk\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.399764 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98xpd\" (UniqueName: \"kubernetes.io/projected/db665e18-1785-4fa2-8477-c1710eac0146-kube-api-access-98xpd\") pod \"watcherb45c-account-delete-jdhgk\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 
08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.400785 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db665e18-1785-4fa2-8477-c1710eac0146-operator-scripts\") pod \"watcherb45c-account-delete-jdhgk\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.430138 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98xpd\" (UniqueName: \"kubernetes.io/projected/db665e18-1785-4fa2-8477-c1710eac0146-kube-api-access-98xpd\") pod \"watcherb45c-account-delete-jdhgk\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:40 crc kubenswrapper[5023]: I0219 08:26:40.510131 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:41 crc kubenswrapper[5023]: I0219 08:26:41.026952 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherb45c-account-delete-jdhgk"] Feb 19 08:26:41 crc kubenswrapper[5023]: I0219 08:26:41.103692 5023 generic.go:334] "Generic (PLEG): container finished" podID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerID="762bfcbd96fceeca62c8e67fd2e49a095907c2fe047d9a32a80a015e95fbd257" exitCode=143 Feb 19 08:26:41 crc kubenswrapper[5023]: I0219 08:26:41.103752 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e0f82a63-79a3-4fe0-b51b-41c32a781fa9","Type":"ContainerDied","Data":"762bfcbd96fceeca62c8e67fd2e49a095907c2fe047d9a32a80a015e95fbd257"} Feb 19 08:26:41 crc kubenswrapper[5023]: I0219 08:26:41.105267 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" 
event={"ID":"db665e18-1785-4fa2-8477-c1710eac0146","Type":"ContainerStarted","Data":"97747c5bbb3c43f0c736d7a3f6f73c9803bb63f7e78d44e5f07548923e6027a1"} Feb 19 08:26:41 crc kubenswrapper[5023]: I0219 08:26:41.497091 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1636bd3e-1de3-4efb-addd-1bd9c65ad48b" path="/var/lib/kubelet/pods/1636bd3e-1de3-4efb-addd-1bd9c65ad48b/volumes" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.116307 5023 generic.go:334] "Generic (PLEG): container finished" podID="db665e18-1785-4fa2-8477-c1710eac0146" containerID="ebb046dc0b1d4244ced83182e8ee78a3ee4c594f8b94f6cfba1d6ba5e9822b78" exitCode=0 Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.116751 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" event={"ID":"db665e18-1785-4fa2-8477-c1710eac0146","Type":"ContainerDied","Data":"ebb046dc0b1d4244ced83182e8ee78a3ee4c594f8b94f6cfba1d6ba5e9822b78"} Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.118799 5023 generic.go:334] "Generic (PLEG): container finished" podID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerID="573c9b7e4b58ea38ac9e8bfaabe3adc5150d2cc852001b66b847da7fcabc7939" exitCode=0 Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.118850 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e0f82a63-79a3-4fe0-b51b-41c32a781fa9","Type":"ContainerDied","Data":"573c9b7e4b58ea38ac9e8bfaabe3adc5150d2cc852001b66b847da7fcabc7939"} Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.289705 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.509675 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.649562 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-custom-prometheus-ca\") pod \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.649742 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-combined-ca-bundle\") pod \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.649772 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-logs\") pod \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.649810 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-config-data\") pod \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.649831 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhmn\" (UniqueName: \"kubernetes.io/projected/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-kube-api-access-hwhmn\") pod \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.649872 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-cert-memcached-mtls\") pod \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\" (UID: \"e0f82a63-79a3-4fe0-b51b-41c32a781fa9\") " Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.650973 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-logs" (OuterVolumeSpecName: "logs") pod "e0f82a63-79a3-4fe0-b51b-41c32a781fa9" (UID: "e0f82a63-79a3-4fe0-b51b-41c32a781fa9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.668144 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-kube-api-access-hwhmn" (OuterVolumeSpecName: "kube-api-access-hwhmn") pod "e0f82a63-79a3-4fe0-b51b-41c32a781fa9" (UID: "e0f82a63-79a3-4fe0-b51b-41c32a781fa9"). InnerVolumeSpecName "kube-api-access-hwhmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.706319 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e0f82a63-79a3-4fe0-b51b-41c32a781fa9" (UID: "e0f82a63-79a3-4fe0-b51b-41c32a781fa9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.711337 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0f82a63-79a3-4fe0-b51b-41c32a781fa9" (UID: "e0f82a63-79a3-4fe0-b51b-41c32a781fa9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.731564 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-config-data" (OuterVolumeSpecName: "config-data") pod "e0f82a63-79a3-4fe0-b51b-41c32a781fa9" (UID: "e0f82a63-79a3-4fe0-b51b-41c32a781fa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.740083 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "e0f82a63-79a3-4fe0-b51b-41c32a781fa9" (UID: "e0f82a63-79a3-4fe0-b51b-41c32a781fa9"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.752089 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.752126 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.752137 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhmn\" (UniqueName: \"kubernetes.io/projected/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-kube-api-access-hwhmn\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.752148 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 
19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.752157 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:42 crc kubenswrapper[5023]: I0219 08:26:42.752165 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f82a63-79a3-4fe0-b51b-41c32a781fa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.129516 5023 generic.go:334] "Generic (PLEG): container finished" podID="742301d1-d2cb-4e92-8bcc-5129532c4124" containerID="095ff9dbf3cf7837f52e4a1298b620e9010ba7102d9e3612a8305831b985a824" exitCode=0 Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.129585 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"742301d1-d2cb-4e92-8bcc-5129532c4124","Type":"ContainerDied","Data":"095ff9dbf3cf7837f52e4a1298b620e9010ba7102d9e3612a8305831b985a824"} Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.131847 5023 generic.go:334] "Generic (PLEG): container finished" podID="b94b892f-04c3-42ab-867e-65d9f5ffa0b1" containerID="89e09ec5bcc66319d9a90acafe989710b320a0455fbbd0e706baaa885e977f49" exitCode=0 Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.131904 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b94b892f-04c3-42ab-867e-65d9f5ffa0b1","Type":"ContainerDied","Data":"89e09ec5bcc66319d9a90acafe989710b320a0455fbbd0e706baaa885e977f49"} Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.135248 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.135536 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e0f82a63-79a3-4fe0-b51b-41c32a781fa9","Type":"ContainerDied","Data":"e8565794bbba9da50f4e9c3b6ca39929ecd5c26f0de6db535e63510489d53855"} Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.135597 5023 scope.go:117] "RemoveContainer" containerID="573c9b7e4b58ea38ac9e8bfaabe3adc5150d2cc852001b66b847da7fcabc7939" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.193386 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.197780 5023 scope.go:117] "RemoveContainer" containerID="762bfcbd96fceeca62c8e67fd2e49a095907c2fe047d9a32a80a015e95fbd257" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.198442 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.202509 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.259493 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-cert-memcached-mtls\") pod \"742301d1-d2cb-4e92-8bcc-5129532c4124\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.259558 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwsd\" (UniqueName: \"kubernetes.io/projected/742301d1-d2cb-4e92-8bcc-5129532c4124-kube-api-access-djwsd\") pod \"742301d1-d2cb-4e92-8bcc-5129532c4124\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.260224 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742301d1-d2cb-4e92-8bcc-5129532c4124-logs\") pod \"742301d1-d2cb-4e92-8bcc-5129532c4124\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.260288 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-custom-prometheus-ca\") pod \"742301d1-d2cb-4e92-8bcc-5129532c4124\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.260576 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742301d1-d2cb-4e92-8bcc-5129532c4124-logs" (OuterVolumeSpecName: "logs") pod "742301d1-d2cb-4e92-8bcc-5129532c4124" (UID: 
"742301d1-d2cb-4e92-8bcc-5129532c4124"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.260747 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-config-data\") pod \"742301d1-d2cb-4e92-8bcc-5129532c4124\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.261287 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-combined-ca-bundle\") pod \"742301d1-d2cb-4e92-8bcc-5129532c4124\" (UID: \"742301d1-d2cb-4e92-8bcc-5129532c4124\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.261667 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742301d1-d2cb-4e92-8bcc-5129532c4124-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.282896 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742301d1-d2cb-4e92-8bcc-5129532c4124-kube-api-access-djwsd" (OuterVolumeSpecName: "kube-api-access-djwsd") pod "742301d1-d2cb-4e92-8bcc-5129532c4124" (UID: "742301d1-d2cb-4e92-8bcc-5129532c4124"). InnerVolumeSpecName "kube-api-access-djwsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.297384 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "742301d1-d2cb-4e92-8bcc-5129532c4124" (UID: "742301d1-d2cb-4e92-8bcc-5129532c4124"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.305042 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "742301d1-d2cb-4e92-8bcc-5129532c4124" (UID: "742301d1-d2cb-4e92-8bcc-5129532c4124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.356768 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-config-data" (OuterVolumeSpecName: "config-data") pod "742301d1-d2cb-4e92-8bcc-5129532c4124" (UID: "742301d1-d2cb-4e92-8bcc-5129532c4124"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.361965 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "742301d1-d2cb-4e92-8bcc-5129532c4124" (UID: "742301d1-d2cb-4e92-8bcc-5129532c4124"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.363664 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.363711 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwsd\" (UniqueName: \"kubernetes.io/projected/742301d1-d2cb-4e92-8bcc-5129532c4124-kube-api-access-djwsd\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.363727 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.363738 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.363749 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742301d1-d2cb-4e92-8bcc-5129532c4124-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.512355 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" path="/var/lib/kubelet/pods/e0f82a63-79a3-4fe0-b51b-41c32a781fa9/volumes" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.595109 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.595529 5023 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-central-agent" containerID="cri-o://771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f" gracePeriod=30 Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.596219 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="proxy-httpd" containerID="cri-o://def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95" gracePeriod=30 Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.596310 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="sg-core" containerID="cri-o://a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe" gracePeriod=30 Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.596387 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-notification-agent" containerID="cri-o://95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f" gracePeriod=30 Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.620379 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.688080 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.769711 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-cert-memcached-mtls\") pod \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.769774 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-logs\") pod \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.769823 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98xpd\" (UniqueName: \"kubernetes.io/projected/db665e18-1785-4fa2-8477-c1710eac0146-kube-api-access-98xpd\") pod \"db665e18-1785-4fa2-8477-c1710eac0146\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.769845 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-config-data\") pod \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.769879 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tvh5\" (UniqueName: \"kubernetes.io/projected/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-kube-api-access-4tvh5\") pod \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.769968 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-combined-ca-bundle\") pod \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\" (UID: \"b94b892f-04c3-42ab-867e-65d9f5ffa0b1\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.770018 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db665e18-1785-4fa2-8477-c1710eac0146-operator-scripts\") pod \"db665e18-1785-4fa2-8477-c1710eac0146\" (UID: \"db665e18-1785-4fa2-8477-c1710eac0146\") " Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.770206 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-logs" (OuterVolumeSpecName: "logs") pod "b94b892f-04c3-42ab-867e-65d9f5ffa0b1" (UID: "b94b892f-04c3-42ab-867e-65d9f5ffa0b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.770599 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db665e18-1785-4fa2-8477-c1710eac0146-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db665e18-1785-4fa2-8477-c1710eac0146" (UID: "db665e18-1785-4fa2-8477-c1710eac0146"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.770706 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.774054 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db665e18-1785-4fa2-8477-c1710eac0146-kube-api-access-98xpd" (OuterVolumeSpecName: "kube-api-access-98xpd") pod "db665e18-1785-4fa2-8477-c1710eac0146" (UID: "db665e18-1785-4fa2-8477-c1710eac0146"). InnerVolumeSpecName "kube-api-access-98xpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.774088 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-kube-api-access-4tvh5" (OuterVolumeSpecName: "kube-api-access-4tvh5") pod "b94b892f-04c3-42ab-867e-65d9f5ffa0b1" (UID: "b94b892f-04c3-42ab-867e-65d9f5ffa0b1"). InnerVolumeSpecName "kube-api-access-4tvh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.797848 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b94b892f-04c3-42ab-867e-65d9f5ffa0b1" (UID: "b94b892f-04c3-42ab-867e-65d9f5ffa0b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.828726 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-config-data" (OuterVolumeSpecName: "config-data") pod "b94b892f-04c3-42ab-867e-65d9f5ffa0b1" (UID: "b94b892f-04c3-42ab-867e-65d9f5ffa0b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.838441 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b94b892f-04c3-42ab-867e-65d9f5ffa0b1" (UID: "b94b892f-04c3-42ab-867e-65d9f5ffa0b1"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.872423 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.872520 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98xpd\" (UniqueName: \"kubernetes.io/projected/db665e18-1785-4fa2-8477-c1710eac0146-kube-api-access-98xpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.872537 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.872548 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tvh5\" (UniqueName: \"kubernetes.io/projected/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-kube-api-access-4tvh5\") on 
node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.872558 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94b892f-04c3-42ab-867e-65d9f5ffa0b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:43 crc kubenswrapper[5023]: I0219 08:26:43.872568 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db665e18-1785-4fa2-8477-c1710eac0146-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.144119 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" event={"ID":"db665e18-1785-4fa2-8477-c1710eac0146","Type":"ContainerDied","Data":"97747c5bbb3c43f0c736d7a3f6f73c9803bb63f7e78d44e5f07548923e6027a1"} Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.144187 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97747c5bbb3c43f0c736d7a3f6f73c9803bb63f7e78d44e5f07548923e6027a1" Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.144147 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherb45c-account-delete-jdhgk" Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.146121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"742301d1-d2cb-4e92-8bcc-5129532c4124","Type":"ContainerDied","Data":"968e8a69145a7e319b19fc4c37e95a0645ea9df8c288f1c6e5ec54a39b47b39b"} Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.146200 5023 scope.go:117] "RemoveContainer" containerID="095ff9dbf3cf7837f52e4a1298b620e9010ba7102d9e3612a8305831b985a824" Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.146134 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.148372 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b94b892f-04c3-42ab-867e-65d9f5ffa0b1","Type":"ContainerDied","Data":"89e1f773269a3e551b4c29875b5e91b388a3b9e96b502534eb1500952848ec47"} Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.148424 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.154488 5023 generic.go:334] "Generic (PLEG): container finished" podID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerID="def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95" exitCode=0 Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.154531 5023 generic.go:334] "Generic (PLEG): container finished" podID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerID="a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe" exitCode=2 Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.154543 5023 generic.go:334] "Generic (PLEG): container finished" podID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerID="771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f" exitCode=0 Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.154565 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerDied","Data":"def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95"} Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.154589 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerDied","Data":"a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe"} Feb 19 
08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.154601 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerDied","Data":"771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f"} Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.174834 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.178897 5023 scope.go:117] "RemoveContainer" containerID="89e09ec5bcc66319d9a90acafe989710b320a0455fbbd0e706baaa885e977f49" Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.182325 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.200156 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:26:44 crc kubenswrapper[5023]: I0219 08:26:44.207147 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.257229 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tlxrw"] Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.312530 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherb45c-account-delete-jdhgk"] Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.324231 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tlxrw"] Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.337878 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7"] Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.353689 5023 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["watcher-kuttl-default/watcherb45c-account-delete-jdhgk"] Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.359681 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-b45c-account-create-update-6vdb7"] Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.487878 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742301d1-d2cb-4e92-8bcc-5129532c4124" path="/var/lib/kubelet/pods/742301d1-d2cb-4e92-8bcc-5129532c4124/volumes" Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.488718 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94b892f-04c3-42ab-867e-65d9f5ffa0b1" path="/var/lib/kubelet/pods/b94b892f-04c3-42ab-867e-65d9f5ffa0b1/volumes" Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.489245 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da273fe4-2d94-45fa-a45d-3f3e77cb8082" path="/var/lib/kubelet/pods/da273fe4-2d94-45fa-a45d-3f3e77cb8082/volumes" Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.490238 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db665e18-1785-4fa2-8477-c1710eac0146" path="/var/lib/kubelet/pods/db665e18-1785-4fa2-8477-c1710eac0146/volumes" Feb 19 08:26:45 crc kubenswrapper[5023]: I0219 08:26:45.490791 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5555e0c-d705-4ff7-842f-96152050d5d5" path="/var/lib/kubelet/pods/e5555e0c-d705-4ff7-842f-96152050d5d5/volumes" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.080813 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.178098 5023 generic.go:334] "Generic (PLEG): container finished" podID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerID="95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f" exitCode=0 Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.178143 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerDied","Data":"95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f"} Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.178178 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"68916119-0370-40b3-9dd4-ed0a3aa8c0fc","Type":"ContainerDied","Data":"fd936690abbfe2ce12d6a56ce426730d92c7218fb52760e84c0405d71f958d97"} Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.178196 5023 scope.go:117] "RemoveContainer" containerID="def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.178203 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.203961 5023 scope.go:117] "RemoveContainer" containerID="a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226018 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-scripts\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226101 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-ceilometer-tls-certs\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226129 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-combined-ca-bundle\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226162 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-run-httpd\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226243 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-log-httpd\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 
08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226306 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-sg-core-conf-yaml\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226339 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb8sd\" (UniqueName: \"kubernetes.io/projected/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-kube-api-access-hb8sd\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.226400 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-config-data\") pod \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\" (UID: \"68916119-0370-40b3-9dd4-ed0a3aa8c0fc\") " Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.228926 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: "68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.229101 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: "68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.237793 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-kube-api-access-hb8sd" (OuterVolumeSpecName: "kube-api-access-hb8sd") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: "68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "kube-api-access-hb8sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.237881 5023 scope.go:117] "RemoveContainer" containerID="95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.237906 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-scripts" (OuterVolumeSpecName: "scripts") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: "68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.282461 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: "68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.320811 5023 scope.go:117] "RemoveContainer" containerID="771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.321004 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: "68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.327702 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.327724 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.327734 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb8sd\" (UniqueName: \"kubernetes.io/projected/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-kube-api-access-hb8sd\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.327743 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.327753 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-ceilometer-tls-certs\") on node \"crc\" 
DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.327763 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.341835 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: "68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.355921 5023 scope.go:117] "RemoveContainer" containerID="def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.356846 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95\": container with ID starting with def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95 not found: ID does not exist" containerID="def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.356907 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95"} err="failed to get container status \"def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95\": rpc error: code = NotFound desc = could not find container \"def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95\": container with ID starting with def7f983cec309499862d055b298426bade6fa2dc0ef5f60ceec4136e0161a95 not found: ID does not exist" Feb 19 08:26:46 crc 
kubenswrapper[5023]: I0219 08:26:46.356950 5023 scope.go:117] "RemoveContainer" containerID="a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.358363 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe\": container with ID starting with a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe not found: ID does not exist" containerID="a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.358432 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe"} err="failed to get container status \"a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe\": rpc error: code = NotFound desc = could not find container \"a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe\": container with ID starting with a8e617fd1a68ca6db2750d901ae78362a7b46db04dc05d521d704248b2b071fe not found: ID does not exist" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.358479 5023 scope.go:117] "RemoveContainer" containerID="95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.359025 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f\": container with ID starting with 95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f not found: ID does not exist" containerID="95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.359073 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f"} err="failed to get container status \"95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f\": rpc error: code = NotFound desc = could not find container \"95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f\": container with ID starting with 95ac6bbd6bf9d9e8f74e7659ecb5024154761ff72b90c1d0b97d9e3368eb5c0f not found: ID does not exist" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.359098 5023 scope.go:117] "RemoveContainer" containerID="771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.359396 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f\": container with ID starting with 771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f not found: ID does not exist" containerID="771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.359429 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f"} err="failed to get container status \"771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f\": rpc error: code = NotFound desc = could not find container \"771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f\": container with ID starting with 771201e842789833704007b3ab4723bca270d34986f13ba59d2d379ff8d2521f not found: ID does not exist" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.376711 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-config-data" (OuterVolumeSpecName: "config-data") pod "68916119-0370-40b3-9dd4-ed0a3aa8c0fc" (UID: 
"68916119-0370-40b3-9dd4-ed0a3aa8c0fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.429465 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.429818 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68916119-0370-40b3-9dd4-ed0a3aa8c0fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.513444 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.519206 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.553203 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.553895 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-notification-agent" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.553912 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-notification-agent" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.553923 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="proxy-httpd" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.553931 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="proxy-httpd" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 
08:26:46.553946 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94b892f-04c3-42ab-867e-65d9f5ffa0b1" containerName="watcher-applier" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.553952 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94b892f-04c3-42ab-867e-65d9f5ffa0b1" containerName="watcher-applier" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.553964 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-api" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.553970 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-api" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.553982 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db665e18-1785-4fa2-8477-c1710eac0146" containerName="mariadb-account-delete" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.553988 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="db665e18-1785-4fa2-8477-c1710eac0146" containerName="mariadb-account-delete" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.554005 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="sg-core" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554011 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="sg-core" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.554026 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-central-agent" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554033 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-central-agent" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 
08:26:46.554050 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-kuttl-api-log" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554055 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-kuttl-api-log" Feb 19 08:26:46 crc kubenswrapper[5023]: E0219 08:26:46.554066 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742301d1-d2cb-4e92-8bcc-5129532c4124" containerName="watcher-decision-engine" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554072 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="742301d1-d2cb-4e92-8bcc-5129532c4124" containerName="watcher-decision-engine" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554213 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-api" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554222 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="sg-core" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554232 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="db665e18-1785-4fa2-8477-c1710eac0146" containerName="mariadb-account-delete" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554240 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f82a63-79a3-4fe0-b51b-41c32a781fa9" containerName="watcher-kuttl-api-log" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554246 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="proxy-httpd" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554257 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94b892f-04c3-42ab-867e-65d9f5ffa0b1" containerName="watcher-applier" Feb 19 
08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554269 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="742301d1-d2cb-4e92-8bcc-5129532c4124" containerName="watcher-decision-engine" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554277 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-notification-agent" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.554286 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" containerName="ceilometer-central-agent" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.556470 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.559324 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.560122 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.562369 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.582085 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641045 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641105 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-scripts\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641171 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-run-httpd\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641201 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-config-data\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641228 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641246 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641292 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-log-httpd\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.641309 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmfdl\" (UniqueName: \"kubernetes.io/projected/85cc0e82-553a-4d25-be20-03fcb8b35b67-kube-api-access-rmfdl\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742574 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742673 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742726 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-log-httpd\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742756 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmfdl\" (UniqueName: \"kubernetes.io/projected/85cc0e82-553a-4d25-be20-03fcb8b35b67-kube-api-access-rmfdl\") 
pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742846 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742885 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-scripts\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742933 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-run-httpd\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.742985 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-config-data\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.743460 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-log-httpd\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.743823 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-run-httpd\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.748590 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.748815 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-config-data\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.750108 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.754697 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.763211 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmfdl\" (UniqueName: \"kubernetes.io/projected/85cc0e82-553a-4d25-be20-03fcb8b35b67-kube-api-access-rmfdl\") 
pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.770542 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-scripts\") pod \"ceilometer-0\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:46 crc kubenswrapper[5023]: I0219 08:26:46.875290 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.356296 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.486699 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68916119-0370-40b3-9dd4-ed0a3aa8c0fc" path="/var/lib/kubelet/pods/68916119-0370-40b3-9dd4-ed0a3aa8c0fc/volumes" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.733793 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-sb9qn"] Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.734963 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.755394 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-sb9qn"] Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.837292 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-be6b-account-create-update-zcd79"] Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.838604 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.844098 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.863638 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltmvb\" (UniqueName: \"kubernetes.io/projected/c3cae5ca-eba1-4699-a9f1-cb42c8266469-kube-api-access-ltmvb\") pod \"watcher-db-create-sb9qn\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.863729 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cae5ca-eba1-4699-a9f1-cb42c8266469-operator-scripts\") pod \"watcher-db-create-sb9qn\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.879484 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-be6b-account-create-update-zcd79"] Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.965129 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltmvb\" (UniqueName: \"kubernetes.io/projected/c3cae5ca-eba1-4699-a9f1-cb42c8266469-kube-api-access-ltmvb\") pod \"watcher-db-create-sb9qn\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.965199 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-operator-scripts\") pod 
\"watcher-be6b-account-create-update-zcd79\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.965276 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cae5ca-eba1-4699-a9f1-cb42c8266469-operator-scripts\") pod \"watcher-db-create-sb9qn\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.965355 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfcn\" (UniqueName: \"kubernetes.io/projected/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-kube-api-access-2zfcn\") pod \"watcher-be6b-account-create-update-zcd79\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.966260 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cae5ca-eba1-4699-a9f1-cb42c8266469-operator-scripts\") pod \"watcher-db-create-sb9qn\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:47 crc kubenswrapper[5023]: I0219 08:26:47.985492 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltmvb\" (UniqueName: \"kubernetes.io/projected/c3cae5ca-eba1-4699-a9f1-cb42c8266469-kube-api-access-ltmvb\") pod \"watcher-db-create-sb9qn\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.053648 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.066723 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfcn\" (UniqueName: \"kubernetes.io/projected/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-kube-api-access-2zfcn\") pod \"watcher-be6b-account-create-update-zcd79\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.066809 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-operator-scripts\") pod \"watcher-be6b-account-create-update-zcd79\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.067429 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-operator-scripts\") pod \"watcher-be6b-account-create-update-zcd79\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.084829 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfcn\" (UniqueName: \"kubernetes.io/projected/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-kube-api-access-2zfcn\") pod \"watcher-be6b-account-create-update-zcd79\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.217765 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerStarted","Data":"1495996cc6f49060a51ac41ee724a3bb7e92f0a0eb6273742c5d33f4ecd4d7fa"} Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.218248 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerStarted","Data":"9bd475342114d31367adb84e96cec8d00e8befafafeb13fe0be8972b45431fd1"} Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.272969 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:48 crc kubenswrapper[5023]: I0219 08:26:48.628130 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-sb9qn"] Feb 19 08:26:48 crc kubenswrapper[5023]: W0219 08:26:48.670045 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3cae5ca_eba1_4699_a9f1_cb42c8266469.slice/crio-f82a316c873ea94849c0180ad42aeecd3bfee9bee60fd3faca8a8897186cf2e5 WatchSource:0}: Error finding container f82a316c873ea94849c0180ad42aeecd3bfee9bee60fd3faca8a8897186cf2e5: Status 404 returned error can't find the container with id f82a316c873ea94849c0180ad42aeecd3bfee9bee60fd3faca8a8897186cf2e5 Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.053084 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-be6b-account-create-update-zcd79"] Feb 19 08:26:49 crc kubenswrapper[5023]: W0219 08:26:49.057342 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b7f513c_ac66_4a8b_8f32_9e79f665b4b8.slice/crio-d628279a73f0c60548a2a6ed7790d3973285c99707142500ecb14594ba4c6455 WatchSource:0}: Error finding container d628279a73f0c60548a2a6ed7790d3973285c99707142500ecb14594ba4c6455: Status 404 
returned error can't find the container with id d628279a73f0c60548a2a6ed7790d3973285c99707142500ecb14594ba4c6455 Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.228868 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerStarted","Data":"09c5fdb5068c169111b854da53ff2bc118be4f311580585abbca59aad1fc2d9c"} Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.230785 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" event={"ID":"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8","Type":"ContainerStarted","Data":"a6fcab4a38b5cd07da5e0ac3068c232d3d6d3117df2ce46a671b96f7209d4c9a"} Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.230829 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" event={"ID":"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8","Type":"ContainerStarted","Data":"d628279a73f0c60548a2a6ed7790d3973285c99707142500ecb14594ba4c6455"} Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.233495 5023 generic.go:334] "Generic (PLEG): container finished" podID="c3cae5ca-eba1-4699-a9f1-cb42c8266469" containerID="8702df5d146cfaf8b2cec6c1fa151821e06f0fc0b9dbf78db329d0d09598d17b" exitCode=0 Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.233550 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-sb9qn" event={"ID":"c3cae5ca-eba1-4699-a9f1-cb42c8266469","Type":"ContainerDied","Data":"8702df5d146cfaf8b2cec6c1fa151821e06f0fc0b9dbf78db329d0d09598d17b"} Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.233592 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-sb9qn" event={"ID":"c3cae5ca-eba1-4699-a9f1-cb42c8266469","Type":"ContainerStarted","Data":"f82a316c873ea94849c0180ad42aeecd3bfee9bee60fd3faca8a8897186cf2e5"} 
Feb 19 08:26:49 crc kubenswrapper[5023]: I0219 08:26:49.255841 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" podStartSLOduration=2.255816007 podStartE2EDuration="2.255816007s" podCreationTimestamp="2026-02-19 08:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:26:49.245866143 +0000 UTC m=+1566.902985091" watchObservedRunningTime="2026-02-19 08:26:49.255816007 +0000 UTC m=+1566.912934965" Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.246435 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerStarted","Data":"e7a3e36ed78aba032b84d09f3f0e04eb14d5de56f6dfb7b3e0e4465dfc4894e6"} Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.248277 5023 generic.go:334] "Generic (PLEG): container finished" podID="3b7f513c-ac66-4a8b-8f32-9e79f665b4b8" containerID="a6fcab4a38b5cd07da5e0ac3068c232d3d6d3117df2ce46a671b96f7209d4c9a" exitCode=0 Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.248392 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" event={"ID":"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8","Type":"ContainerDied","Data":"a6fcab4a38b5cd07da5e0ac3068c232d3d6d3117df2ce46a671b96f7209d4c9a"} Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.686728 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.719536 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltmvb\" (UniqueName: \"kubernetes.io/projected/c3cae5ca-eba1-4699-a9f1-cb42c8266469-kube-api-access-ltmvb\") pod \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.719788 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cae5ca-eba1-4699-a9f1-cb42c8266469-operator-scripts\") pod \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\" (UID: \"c3cae5ca-eba1-4699-a9f1-cb42c8266469\") " Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.720329 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cae5ca-eba1-4699-a9f1-cb42c8266469-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3cae5ca-eba1-4699-a9f1-cb42c8266469" (UID: "c3cae5ca-eba1-4699-a9f1-cb42c8266469"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.728761 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3cae5ca-eba1-4699-a9f1-cb42c8266469-kube-api-access-ltmvb" (OuterVolumeSpecName: "kube-api-access-ltmvb") pod "c3cae5ca-eba1-4699-a9f1-cb42c8266469" (UID: "c3cae5ca-eba1-4699-a9f1-cb42c8266469"). InnerVolumeSpecName "kube-api-access-ltmvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.822031 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3cae5ca-eba1-4699-a9f1-cb42c8266469-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:50 crc kubenswrapper[5023]: I0219 08:26:50.822097 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltmvb\" (UniqueName: \"kubernetes.io/projected/c3cae5ca-eba1-4699-a9f1-cb42c8266469-kube-api-access-ltmvb\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.257044 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-sb9qn" event={"ID":"c3cae5ca-eba1-4699-a9f1-cb42c8266469","Type":"ContainerDied","Data":"f82a316c873ea94849c0180ad42aeecd3bfee9bee60fd3faca8a8897186cf2e5"} Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.257461 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82a316c873ea94849c0180ad42aeecd3bfee9bee60fd3faca8a8897186cf2e5" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.257305 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-sb9qn" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.260715 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerStarted","Data":"126f669dbff747d2b068d8cfeb4429339f276050f4cbb50520301a307708142a"} Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.260779 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.289372 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.7121379700000001 podStartE2EDuration="5.289354241s" podCreationTimestamp="2026-02-19 08:26:46 +0000 UTC" firstStartedPulling="2026-02-19 08:26:47.365715972 +0000 UTC m=+1565.022834920" lastFinishedPulling="2026-02-19 08:26:50.942932243 +0000 UTC m=+1568.600051191" observedRunningTime="2026-02-19 08:26:51.288079877 +0000 UTC m=+1568.945198825" watchObservedRunningTime="2026-02-19 08:26:51.289354241 +0000 UTC m=+1568.946473189" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.613248 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.735172 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zfcn\" (UniqueName: \"kubernetes.io/projected/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-kube-api-access-2zfcn\") pod \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.735445 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-operator-scripts\") pod \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\" (UID: \"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8\") " Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.744903 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b7f513c-ac66-4a8b-8f32-9e79f665b4b8" (UID: "3b7f513c-ac66-4a8b-8f32-9e79f665b4b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.748102 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-kube-api-access-2zfcn" (OuterVolumeSpecName: "kube-api-access-2zfcn") pod "3b7f513c-ac66-4a8b-8f32-9e79f665b4b8" (UID: "3b7f513c-ac66-4a8b-8f32-9e79f665b4b8"). InnerVolumeSpecName "kube-api-access-2zfcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.838095 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zfcn\" (UniqueName: \"kubernetes.io/projected/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-kube-api-access-2zfcn\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:51 crc kubenswrapper[5023]: I0219 08:26:51.838323 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:52 crc kubenswrapper[5023]: I0219 08:26:52.269962 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" event={"ID":"3b7f513c-ac66-4a8b-8f32-9e79f665b4b8","Type":"ContainerDied","Data":"d628279a73f0c60548a2a6ed7790d3973285c99707142500ecb14594ba4c6455"} Feb 19 08:26:52 crc kubenswrapper[5023]: I0219 08:26:52.270014 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d628279a73f0c60548a2a6ed7790d3973285c99707142500ecb14594ba4c6455" Feb 19 08:26:52 crc kubenswrapper[5023]: I0219 08:26:52.270097 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-be6b-account-create-update-zcd79" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.667805 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9"] Feb 19 08:26:53 crc kubenswrapper[5023]: E0219 08:26:53.668382 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cae5ca-eba1-4699-a9f1-cb42c8266469" containerName="mariadb-database-create" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.668393 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cae5ca-eba1-4699-a9f1-cb42c8266469" containerName="mariadb-database-create" Feb 19 08:26:53 crc kubenswrapper[5023]: E0219 08:26:53.668414 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7f513c-ac66-4a8b-8f32-9e79f665b4b8" containerName="mariadb-account-create-update" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.668420 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7f513c-ac66-4a8b-8f32-9e79f665b4b8" containerName="mariadb-account-create-update" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.668556 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7f513c-ac66-4a8b-8f32-9e79f665b4b8" containerName="mariadb-account-create-update" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.668568 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3cae5ca-eba1-4699-a9f1-cb42c8266469" containerName="mariadb-database-create" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.669090 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.671101 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.673167 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-wqs7c" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.679930 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9"] Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.771862 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.771980 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-config-data\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.772063 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gk8c\" (UniqueName: \"kubernetes.io/projected/500f927a-92c7-4359-ad14-95b8442525c7-kube-api-access-7gk8c\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.772193 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.873576 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.873693 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-config-data\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.873750 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gk8c\" (UniqueName: \"kubernetes.io/projected/500f927a-92c7-4359-ad14-95b8442525c7-kube-api-access-7gk8c\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.873789 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 
08:26:53.877527 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.878195 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.879096 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-config-data\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.891016 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gk8c\" (UniqueName: \"kubernetes.io/projected/500f927a-92c7-4359-ad14-95b8442525c7-kube-api-access-7gk8c\") pod \"watcher-kuttl-db-sync-w7xr9\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:53 crc kubenswrapper[5023]: I0219 08:26:53.985130 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:54 crc kubenswrapper[5023]: I0219 08:26:54.444957 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9"] Feb 19 08:26:55 crc kubenswrapper[5023]: I0219 08:26:55.299567 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" event={"ID":"500f927a-92c7-4359-ad14-95b8442525c7","Type":"ContainerStarted","Data":"56259fe60dda93b6b16493089c1647dc830d702813b9fb73b3be3aa72b9fb691"} Feb 19 08:26:55 crc kubenswrapper[5023]: I0219 08:26:55.299960 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" event={"ID":"500f927a-92c7-4359-ad14-95b8442525c7","Type":"ContainerStarted","Data":"37046c53c4c2167a6694226465d55d1bde1f1d41026248b625d2aeabd5c3240e"} Feb 19 08:26:55 crc kubenswrapper[5023]: I0219 08:26:55.316114 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" podStartSLOduration=2.316096158 podStartE2EDuration="2.316096158s" podCreationTimestamp="2026-02-19 08:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:26:55.313469358 +0000 UTC m=+1572.970588306" watchObservedRunningTime="2026-02-19 08:26:55.316096158 +0000 UTC m=+1572.973215106" Feb 19 08:26:57 crc kubenswrapper[5023]: I0219 08:26:57.315982 5023 generic.go:334] "Generic (PLEG): container finished" podID="500f927a-92c7-4359-ad14-95b8442525c7" containerID="56259fe60dda93b6b16493089c1647dc830d702813b9fb73b3be3aa72b9fb691" exitCode=0 Feb 19 08:26:57 crc kubenswrapper[5023]: I0219 08:26:57.316153 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" 
event={"ID":"500f927a-92c7-4359-ad14-95b8442525c7","Type":"ContainerDied","Data":"56259fe60dda93b6b16493089c1647dc830d702813b9fb73b3be3aa72b9fb691"} Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.736683 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.848291 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-combined-ca-bundle\") pod \"500f927a-92c7-4359-ad14-95b8442525c7\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.848342 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gk8c\" (UniqueName: \"kubernetes.io/projected/500f927a-92c7-4359-ad14-95b8442525c7-kube-api-access-7gk8c\") pod \"500f927a-92c7-4359-ad14-95b8442525c7\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.848393 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-config-data\") pod \"500f927a-92c7-4359-ad14-95b8442525c7\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.848490 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-db-sync-config-data\") pod \"500f927a-92c7-4359-ad14-95b8442525c7\" (UID: \"500f927a-92c7-4359-ad14-95b8442525c7\") " Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.852823 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "500f927a-92c7-4359-ad14-95b8442525c7" (UID: "500f927a-92c7-4359-ad14-95b8442525c7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.853235 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500f927a-92c7-4359-ad14-95b8442525c7-kube-api-access-7gk8c" (OuterVolumeSpecName: "kube-api-access-7gk8c") pod "500f927a-92c7-4359-ad14-95b8442525c7" (UID: "500f927a-92c7-4359-ad14-95b8442525c7"). InnerVolumeSpecName "kube-api-access-7gk8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.872302 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "500f927a-92c7-4359-ad14-95b8442525c7" (UID: "500f927a-92c7-4359-ad14-95b8442525c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.899593 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-config-data" (OuterVolumeSpecName: "config-data") pod "500f927a-92c7-4359-ad14-95b8442525c7" (UID: "500f927a-92c7-4359-ad14-95b8442525c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.950048 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.950091 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.950104 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500f927a-92c7-4359-ad14-95b8442525c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:58 crc kubenswrapper[5023]: I0219 08:26:58.950112 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gk8c\" (UniqueName: \"kubernetes.io/projected/500f927a-92c7-4359-ad14-95b8442525c7-kube-api-access-7gk8c\") on node \"crc\" DevicePath \"\"" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.340437 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" event={"ID":"500f927a-92c7-4359-ad14-95b8442525c7","Type":"ContainerDied","Data":"37046c53c4c2167a6694226465d55d1bde1f1d41026248b625d2aeabd5c3240e"} Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.340482 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37046c53c4c2167a6694226465d55d1bde1f1d41026248b625d2aeabd5c3240e" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.340504 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.597179 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:26:59 crc kubenswrapper[5023]: E0219 08:26:59.597526 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500f927a-92c7-4359-ad14-95b8442525c7" containerName="watcher-kuttl-db-sync" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.597545 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="500f927a-92c7-4359-ad14-95b8442525c7" containerName="watcher-kuttl-db-sync" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.597697 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="500f927a-92c7-4359-ad14-95b8442525c7" containerName="watcher-kuttl-db-sync" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.598220 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.600327 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.601117 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-wqs7c" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.604669 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.606216 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.607837 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.610738 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.639124 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.684803 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.684887 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.684921 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.684984 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cec6a78-c63b-4db4-af9b-30815eb223c5-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.685011 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxvqm\" (UniqueName: \"kubernetes.io/projected/1e46deac-8166-4479-8fd0-31e57a53cedf-kube-api-access-lxvqm\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.685054 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.685126 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.685163 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc 
kubenswrapper[5023]: I0219 08:26:59.685220 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.685256 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.685407 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e46deac-8166-4479-8fd0-31e57a53cedf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.685468 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6lc\" (UniqueName: \"kubernetes.io/projected/1cec6a78-c63b-4db4-af9b-30815eb223c5-kube-api-access-xt6lc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.698103 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.699169 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.723889 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.761921 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786353 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786412 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786428 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786462 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pncs5\" (UniqueName: \"kubernetes.io/projected/4ce4b66d-d848-43ec-96cb-3f593709964a-kube-api-access-pncs5\") pod \"watcher-kuttl-applier-0\" (UID: 
\"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786482 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cec6a78-c63b-4db4-af9b-30815eb223c5-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786500 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxvqm\" (UniqueName: \"kubernetes.io/projected/1e46deac-8166-4479-8fd0-31e57a53cedf-kube-api-access-lxvqm\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786514 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786536 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786553 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-config-data\") pod 
\"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786569 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786601 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786634 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786659 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786677 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786692 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e46deac-8166-4479-8fd0-31e57a53cedf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786708 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ce4b66d-d848-43ec-96cb-3f593709964a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.786730 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt6lc\" (UniqueName: \"kubernetes.io/projected/1cec6a78-c63b-4db4-af9b-30815eb223c5-kube-api-access-xt6lc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.789075 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e46deac-8166-4479-8fd0-31e57a53cedf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.789958 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cec6a78-c63b-4db4-af9b-30815eb223c5-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.792444 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.792680 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.792924 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.793040 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.793945 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc 
kubenswrapper[5023]: I0219 08:26:59.794516 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.795473 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.804453 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.804797 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxvqm\" (UniqueName: \"kubernetes.io/projected/1e46deac-8166-4479-8fd0-31e57a53cedf-kube-api-access-lxvqm\") pod \"watcher-kuttl-api-0\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.806835 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt6lc\" (UniqueName: \"kubernetes.io/projected/1cec6a78-c63b-4db4-af9b-30815eb223c5-kube-api-access-xt6lc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.888031 
5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.888092 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.888118 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.888196 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ce4b66d-d848-43ec-96cb-3f593709964a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.888308 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pncs5\" (UniqueName: \"kubernetes.io/projected/4ce4b66d-d848-43ec-96cb-3f593709964a-kube-api-access-pncs5\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.889605 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ce4b66d-d848-43ec-96cb-3f593709964a-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.898455 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.898579 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.902470 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.907381 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pncs5\" (UniqueName: \"kubernetes.io/projected/4ce4b66d-d848-43ec-96cb-3f593709964a-kube-api-access-pncs5\") pod \"watcher-kuttl-applier-0\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.915932 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:26:59 crc kubenswrapper[5023]: I0219 08:26:59.922058 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:00 crc kubenswrapper[5023]: I0219 08:27:00.072705 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:00 crc kubenswrapper[5023]: I0219 08:27:00.403010 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:00 crc kubenswrapper[5023]: W0219 08:27:00.404677 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e46deac_8166_4479_8fd0_31e57a53cedf.slice/crio-71209eedad4fb83e3237daf9015d975a981cd534cc06dd4d7b760c58ee3bda04 WatchSource:0}: Error finding container 71209eedad4fb83e3237daf9015d975a981cd534cc06dd4d7b760c58ee3bda04: Status 404 returned error can't find the container with id 71209eedad4fb83e3237daf9015d975a981cd534cc06dd4d7b760c58ee3bda04 Feb 19 08:27:00 crc kubenswrapper[5023]: I0219 08:27:00.469603 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:00 crc kubenswrapper[5023]: I0219 08:27:00.557019 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:00 crc kubenswrapper[5023]: W0219 08:27:00.569038 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ce4b66d_d848_43ec_96cb_3f593709964a.slice/crio-91e07b3c614e6a70daa50999724c667a54073afec5b36bf5d7556bbdc4218dc3 WatchSource:0}: Error finding container 91e07b3c614e6a70daa50999724c667a54073afec5b36bf5d7556bbdc4218dc3: Status 404 returned error can't find the container with id 
91e07b3c614e6a70daa50999724c667a54073afec5b36bf5d7556bbdc4218dc3 Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.360416 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1cec6a78-c63b-4db4-af9b-30815eb223c5","Type":"ContainerStarted","Data":"cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4"} Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.360804 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1cec6a78-c63b-4db4-af9b-30815eb223c5","Type":"ContainerStarted","Data":"60b202dfccc74563548193596aad4306e4f06338a6348e6695b23dfeb681f9ae"} Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.362726 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1e46deac-8166-4479-8fd0-31e57a53cedf","Type":"ContainerStarted","Data":"a17a58d80b937064eab21695c3c20f1ca9d240d0bfcbbbe8d70cfb05e3a77caa"} Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.362763 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1e46deac-8166-4479-8fd0-31e57a53cedf","Type":"ContainerStarted","Data":"04571c3105036ceeadf770895d75b7d4f5210253fc9bcad7d6e52e8343cf44b0"} Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.362782 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1e46deac-8166-4479-8fd0-31e57a53cedf","Type":"ContainerStarted","Data":"71209eedad4fb83e3237daf9015d975a981cd534cc06dd4d7b760c58ee3bda04"} Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.362939 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.364751 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4ce4b66d-d848-43ec-96cb-3f593709964a","Type":"ContainerStarted","Data":"c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc"} Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.364795 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4ce4b66d-d848-43ec-96cb-3f593709964a","Type":"ContainerStarted","Data":"91e07b3c614e6a70daa50999724c667a54073afec5b36bf5d7556bbdc4218dc3"} Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.394295 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.394277733 podStartE2EDuration="2.394277733s" podCreationTimestamp="2026-02-19 08:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:01.387797021 +0000 UTC m=+1579.044915999" watchObservedRunningTime="2026-02-19 08:27:01.394277733 +0000 UTC m=+1579.051396681" Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.431092 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.4310656 podStartE2EDuration="2.4310656s" podCreationTimestamp="2026-02-19 08:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:01.423349935 +0000 UTC m=+1579.080468883" watchObservedRunningTime="2026-02-19 08:27:01.4310656 +0000 UTC m=+1579.088184538" Feb 19 08:27:01 crc kubenswrapper[5023]: I0219 08:27:01.449864 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.449838158 podStartE2EDuration="2.449838158s" podCreationTimestamp="2026-02-19 08:26:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:01.444064945 +0000 UTC m=+1579.101183913" watchObservedRunningTime="2026-02-19 08:27:01.449838158 +0000 UTC m=+1579.106957106" Feb 19 08:27:03 crc kubenswrapper[5023]: I0219 08:27:03.707323 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:04 crc kubenswrapper[5023]: I0219 08:27:04.923359 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:05 crc kubenswrapper[5023]: I0219 08:27:05.072972 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:09 crc kubenswrapper[5023]: I0219 08:27:09.916917 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:09 crc kubenswrapper[5023]: I0219 08:27:09.922861 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:09 crc kubenswrapper[5023]: I0219 08:27:09.931468 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:09 crc kubenswrapper[5023]: I0219 08:27:09.944724 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:10 crc kubenswrapper[5023]: I0219 08:27:10.073001 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:10 crc kubenswrapper[5023]: I0219 08:27:10.098565 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:10 crc 
kubenswrapper[5023]: I0219 08:27:10.436909 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:10 crc kubenswrapper[5023]: I0219 08:27:10.445914 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:10 crc kubenswrapper[5023]: I0219 08:27:10.460295 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:10 crc kubenswrapper[5023]: I0219 08:27:10.481249 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:12 crc kubenswrapper[5023]: I0219 08:27:12.668571 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:12 crc kubenswrapper[5023]: I0219 08:27:12.669177 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-central-agent" containerID="cri-o://1495996cc6f49060a51ac41ee724a3bb7e92f0a0eb6273742c5d33f4ecd4d7fa" gracePeriod=30 Feb 19 08:27:12 crc kubenswrapper[5023]: I0219 08:27:12.670948 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="proxy-httpd" containerID="cri-o://126f669dbff747d2b068d8cfeb4429339f276050f4cbb50520301a307708142a" gracePeriod=30 Feb 19 08:27:12 crc kubenswrapper[5023]: I0219 08:27:12.671035 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="sg-core" containerID="cri-o://e7a3e36ed78aba032b84d09f3f0e04eb14d5de56f6dfb7b3e0e4465dfc4894e6" gracePeriod=30 Feb 19 08:27:12 crc 
kubenswrapper[5023]: I0219 08:27:12.671055 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-notification-agent" containerID="cri-o://09c5fdb5068c169111b854da53ff2bc118be4f311580585abbca59aad1fc2d9c" gracePeriod=30 Feb 19 08:27:12 crc kubenswrapper[5023]: I0219 08:27:12.680160 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": EOF" Feb 19 08:27:13 crc kubenswrapper[5023]: I0219 08:27:13.462754 5023 generic.go:334] "Generic (PLEG): container finished" podID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerID="126f669dbff747d2b068d8cfeb4429339f276050f4cbb50520301a307708142a" exitCode=0 Feb 19 08:27:13 crc kubenswrapper[5023]: I0219 08:27:13.462786 5023 generic.go:334] "Generic (PLEG): container finished" podID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerID="e7a3e36ed78aba032b84d09f3f0e04eb14d5de56f6dfb7b3e0e4465dfc4894e6" exitCode=2 Feb 19 08:27:13 crc kubenswrapper[5023]: I0219 08:27:13.462795 5023 generic.go:334] "Generic (PLEG): container finished" podID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerID="1495996cc6f49060a51ac41ee724a3bb7e92f0a0eb6273742c5d33f4ecd4d7fa" exitCode=0 Feb 19 08:27:13 crc kubenswrapper[5023]: I0219 08:27:13.462816 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerDied","Data":"126f669dbff747d2b068d8cfeb4429339f276050f4cbb50520301a307708142a"} Feb 19 08:27:13 crc kubenswrapper[5023]: I0219 08:27:13.462842 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerDied","Data":"e7a3e36ed78aba032b84d09f3f0e04eb14d5de56f6dfb7b3e0e4465dfc4894e6"} Feb 19 08:27:13 crc kubenswrapper[5023]: I0219 08:27:13.462853 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerDied","Data":"1495996cc6f49060a51ac41ee724a3bb7e92f0a0eb6273742c5d33f4ecd4d7fa"} Feb 19 08:27:15 crc kubenswrapper[5023]: I0219 08:27:15.485035 5023 generic.go:334] "Generic (PLEG): container finished" podID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerID="09c5fdb5068c169111b854da53ff2bc118be4f311580585abbca59aad1fc2d9c" exitCode=0 Feb 19 08:27:15 crc kubenswrapper[5023]: I0219 08:27:15.487645 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerDied","Data":"09c5fdb5068c169111b854da53ff2bc118be4f311580585abbca59aad1fc2d9c"} Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.066867 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165386 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-sg-core-conf-yaml\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165478 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-run-httpd\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165535 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-config-data\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165572 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-scripts\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165694 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmfdl\" (UniqueName: \"kubernetes.io/projected/85cc0e82-553a-4d25-be20-03fcb8b35b67-kube-api-access-rmfdl\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165742 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-combined-ca-bundle\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165765 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-log-httpd\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165868 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-ceilometer-tls-certs\") pod \"85cc0e82-553a-4d25-be20-03fcb8b35b67\" (UID: \"85cc0e82-553a-4d25-be20-03fcb8b35b67\") " Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.165940 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.166248 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.166341 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.170893 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-scripts" (OuterVolumeSpecName: "scripts") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.171048 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85cc0e82-553a-4d25-be20-03fcb8b35b67-kube-api-access-rmfdl" (OuterVolumeSpecName: "kube-api-access-rmfdl") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "kube-api-access-rmfdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.188812 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.218816 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.247945 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.259332 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-config-data" (OuterVolumeSpecName: "config-data") pod "85cc0e82-553a-4d25-be20-03fcb8b35b67" (UID: "85cc0e82-553a-4d25-be20-03fcb8b35b67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.267310 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.267337 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmfdl\" (UniqueName: \"kubernetes.io/projected/85cc0e82-553a-4d25-be20-03fcb8b35b67-kube-api-access-rmfdl\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.267351 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85cc0e82-553a-4d25-be20-03fcb8b35b67-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.267365 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 
08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.267377 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.267386 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.267395 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc0e82-553a-4d25-be20-03fcb8b35b67-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.498503 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"85cc0e82-553a-4d25-be20-03fcb8b35b67","Type":"ContainerDied","Data":"9bd475342114d31367adb84e96cec8d00e8befafafeb13fe0be8972b45431fd1"} Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.498751 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.498919 5023 scope.go:117] "RemoveContainer" containerID="126f669dbff747d2b068d8cfeb4429339f276050f4cbb50520301a307708142a" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.530335 5023 scope.go:117] "RemoveContainer" containerID="e7a3e36ed78aba032b84d09f3f0e04eb14d5de56f6dfb7b3e0e4465dfc4894e6" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.544421 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.568973 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.571162 5023 scope.go:117] "RemoveContainer" containerID="09c5fdb5068c169111b854da53ff2bc118be4f311580585abbca59aad1fc2d9c" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.578374 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:16 crc kubenswrapper[5023]: E0219 08:27:16.578951 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-central-agent" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.578971 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-central-agent" Feb 19 08:27:16 crc kubenswrapper[5023]: E0219 08:27:16.578994 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-notification-agent" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.579003 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-notification-agent" Feb 19 08:27:16 crc kubenswrapper[5023]: E0219 08:27:16.579014 5023 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="sg-core" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.579023 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="sg-core" Feb 19 08:27:16 crc kubenswrapper[5023]: E0219 08:27:16.579041 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="proxy-httpd" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.579049 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="proxy-httpd" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.579276 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="proxy-httpd" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.579291 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="sg-core" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.579316 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-notification-agent" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.579328 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" containerName="ceilometer-central-agent" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.582008 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.583977 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.626534 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.626731 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.626981 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.657500 5023 scope.go:117] "RemoveContainer" containerID="1495996cc6f49060a51ac41ee724a3bb7e92f0a0eb6273742c5d33f4ecd4d7fa" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.675640 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.675713 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhrp\" (UniqueName: \"kubernetes.io/projected/f7098b37-5e49-4763-a788-910722be2533-kube-api-access-skhrp\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.675796 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-config-data\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.675839 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-log-httpd\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.675858 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-scripts\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.675879 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-run-httpd\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.675945 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.676017 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.777673 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-log-httpd\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.778682 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-scripts\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.778908 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-run-httpd\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.779021 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.779210 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.779376 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.779490 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhrp\" (UniqueName: \"kubernetes.io/projected/f7098b37-5e49-4763-a788-910722be2533-kube-api-access-skhrp\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.779609 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-config-data\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.778334 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-log-httpd\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.779209 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-run-httpd\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.783236 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.797964 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.798448 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.799381 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-config-data\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.804943 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-scripts\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.810449 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhrp\" (UniqueName: \"kubernetes.io/projected/f7098b37-5e49-4763-a788-910722be2533-kube-api-access-skhrp\") pod \"ceilometer-0\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " pod="watcher-kuttl-default/ceilometer-0" 
Feb 19 08:27:16 crc kubenswrapper[5023]: I0219 08:27:16.947330 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.078064 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9"] Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.082903 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-w7xr9"] Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.129659 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherbe6b-account-delete-dpd9r"] Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.131026 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.145947 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.146195 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="1cec6a78-c63b-4db4-af9b-30815eb223c5" containerName="watcher-decision-engine" containerID="cri-o://cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4" gracePeriod=30 Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.181698 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherbe6b-account-delete-dpd9r"] Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.193763 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqchp\" (UniqueName: \"kubernetes.io/projected/8189a688-7287-4e52-97b3-0acfd5107516-kube-api-access-lqchp\") pod \"watcherbe6b-account-delete-dpd9r\" (UID: 
\"8189a688-7287-4e52-97b3-0acfd5107516\") " pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.193883 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189a688-7287-4e52-97b3-0acfd5107516-operator-scripts\") pod \"watcherbe6b-account-delete-dpd9r\" (UID: \"8189a688-7287-4e52-97b3-0acfd5107516\") " pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.222171 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.222400 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="4ce4b66d-d848-43ec-96cb-3f593709964a" containerName="watcher-applier" containerID="cri-o://c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc" gracePeriod=30 Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.296947 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189a688-7287-4e52-97b3-0acfd5107516-operator-scripts\") pod \"watcherbe6b-account-delete-dpd9r\" (UID: \"8189a688-7287-4e52-97b3-0acfd5107516\") " pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.297039 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqchp\" (UniqueName: \"kubernetes.io/projected/8189a688-7287-4e52-97b3-0acfd5107516-kube-api-access-lqchp\") pod \"watcherbe6b-account-delete-dpd9r\" (UID: \"8189a688-7287-4e52-97b3-0acfd5107516\") " pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.298000 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189a688-7287-4e52-97b3-0acfd5107516-operator-scripts\") pod \"watcherbe6b-account-delete-dpd9r\" (UID: \"8189a688-7287-4e52-97b3-0acfd5107516\") " pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.330454 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqchp\" (UniqueName: \"kubernetes.io/projected/8189a688-7287-4e52-97b3-0acfd5107516-kube-api-access-lqchp\") pod \"watcherbe6b-account-delete-dpd9r\" (UID: \"8189a688-7287-4e52-97b3-0acfd5107516\") " pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.366479 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.366718 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-kuttl-api-log" containerID="cri-o://04571c3105036ceeadf770895d75b7d4f5210253fc9bcad7d6e52e8343cf44b0" gracePeriod=30 Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.366853 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-api" containerID="cri-o://a17a58d80b937064eab21695c3c20f1ca9d240d0bfcbbbe8d70cfb05e3a77caa" gracePeriod=30 Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.457874 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.521805 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500f927a-92c7-4359-ad14-95b8442525c7" path="/var/lib/kubelet/pods/500f927a-92c7-4359-ad14-95b8442525c7/volumes" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.522576 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85cc0e82-553a-4d25-be20-03fcb8b35b67" path="/var/lib/kubelet/pods/85cc0e82-553a-4d25-be20-03fcb8b35b67/volumes" Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.543816 5023 generic.go:334] "Generic (PLEG): container finished" podID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerID="04571c3105036ceeadf770895d75b7d4f5210253fc9bcad7d6e52e8343cf44b0" exitCode=143 Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.543902 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1e46deac-8166-4479-8fd0-31e57a53cedf","Type":"ContainerDied","Data":"04571c3105036ceeadf770895d75b7d4f5210253fc9bcad7d6e52e8343cf44b0"} Feb 19 08:27:17 crc kubenswrapper[5023]: W0219 08:27:17.713088 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7098b37_5e49_4763_a788_910722be2533.slice/crio-e6b509bba1b7ed651579dd76ca75010f97a1a08c57fee63dded86fc012193cf9 WatchSource:0}: Error finding container e6b509bba1b7ed651579dd76ca75010f97a1a08c57fee63dded86fc012193cf9: Status 404 returned error can't find the container with id e6b509bba1b7ed651579dd76ca75010f97a1a08c57fee63dded86fc012193cf9 Feb 19 08:27:17 crc kubenswrapper[5023]: I0219 08:27:17.716650 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.014499 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcherbe6b-account-delete-dpd9r"] Feb 19 08:27:18 crc kubenswrapper[5023]: W0219 08:27:18.015775 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8189a688_7287_4e52_97b3_0acfd5107516.slice/crio-b762d910eb7b106ee3f95cc8a6cbd11461c3e8f2b1af89294be406737718583f WatchSource:0}: Error finding container b762d910eb7b106ee3f95cc8a6cbd11461c3e8f2b1af89294be406737718583f: Status 404 returned error can't find the container with id b762d910eb7b106ee3f95cc8a6cbd11461c3e8f2b1af89294be406737718583f Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.557970 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerStarted","Data":"1bc20a546ce85bd370faaa3d2bc714e9e7569ac0bc548f6a0a483419d059f6e2"} Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.558595 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerStarted","Data":"e6b509bba1b7ed651579dd76ca75010f97a1a08c57fee63dded86fc012193cf9"} Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.561109 5023 generic.go:334] "Generic (PLEG): container finished" podID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerID="a17a58d80b937064eab21695c3c20f1ca9d240d0bfcbbbe8d70cfb05e3a77caa" exitCode=0 Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.561154 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1e46deac-8166-4479-8fd0-31e57a53cedf","Type":"ContainerDied","Data":"a17a58d80b937064eab21695c3c20f1ca9d240d0bfcbbbe8d70cfb05e3a77caa"} Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.562869 5023 generic.go:334] "Generic (PLEG): container finished" podID="8189a688-7287-4e52-97b3-0acfd5107516" 
containerID="3c0fdae20f71309bbcc68e65c7e6230a9b0c3259bef73a5ec32e7f8dcb71096f" exitCode=0 Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.562918 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" event={"ID":"8189a688-7287-4e52-97b3-0acfd5107516","Type":"ContainerDied","Data":"3c0fdae20f71309bbcc68e65c7e6230a9b0c3259bef73a5ec32e7f8dcb71096f"} Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.562947 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" event={"ID":"8189a688-7287-4e52-97b3-0acfd5107516","Type":"ContainerStarted","Data":"b762d910eb7b106ee3f95cc8a6cbd11461c3e8f2b1af89294be406737718583f"} Feb 19 08:27:18 crc kubenswrapper[5023]: I0219 08:27:18.942909 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.060021 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-combined-ca-bundle\") pod \"1e46deac-8166-4479-8fd0-31e57a53cedf\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.060090 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-cert-memcached-mtls\") pod \"1e46deac-8166-4479-8fd0-31e57a53cedf\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.060185 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxvqm\" (UniqueName: \"kubernetes.io/projected/1e46deac-8166-4479-8fd0-31e57a53cedf-kube-api-access-lxvqm\") pod \"1e46deac-8166-4479-8fd0-31e57a53cedf\" (UID: 
\"1e46deac-8166-4479-8fd0-31e57a53cedf\") " Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.060245 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e46deac-8166-4479-8fd0-31e57a53cedf-logs\") pod \"1e46deac-8166-4479-8fd0-31e57a53cedf\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.060271 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-custom-prometheus-ca\") pod \"1e46deac-8166-4479-8fd0-31e57a53cedf\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.060292 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-config-data\") pod \"1e46deac-8166-4479-8fd0-31e57a53cedf\" (UID: \"1e46deac-8166-4479-8fd0-31e57a53cedf\") " Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.061203 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e46deac-8166-4479-8fd0-31e57a53cedf-logs" (OuterVolumeSpecName: "logs") pod "1e46deac-8166-4479-8fd0-31e57a53cedf" (UID: "1e46deac-8166-4479-8fd0-31e57a53cedf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.075850 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e46deac-8166-4479-8fd0-31e57a53cedf-kube-api-access-lxvqm" (OuterVolumeSpecName: "kube-api-access-lxvqm") pod "1e46deac-8166-4479-8fd0-31e57a53cedf" (UID: "1e46deac-8166-4479-8fd0-31e57a53cedf"). InnerVolumeSpecName "kube-api-access-lxvqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.083143 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e46deac-8166-4479-8fd0-31e57a53cedf" (UID: "1e46deac-8166-4479-8fd0-31e57a53cedf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.086528 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1e46deac-8166-4479-8fd0-31e57a53cedf" (UID: "1e46deac-8166-4479-8fd0-31e57a53cedf"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.103598 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-config-data" (OuterVolumeSpecName: "config-data") pod "1e46deac-8166-4479-8fd0-31e57a53cedf" (UID: "1e46deac-8166-4479-8fd0-31e57a53cedf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.137441 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1e46deac-8166-4479-8fd0-31e57a53cedf" (UID: "1e46deac-8166-4479-8fd0-31e57a53cedf"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.162583 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxvqm\" (UniqueName: \"kubernetes.io/projected/1e46deac-8166-4479-8fd0-31e57a53cedf-kube-api-access-lxvqm\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.162636 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e46deac-8166-4479-8fd0-31e57a53cedf-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.162652 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.162662 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.162674 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.162685 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1e46deac-8166-4479-8fd0-31e57a53cedf-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.581021 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerStarted","Data":"3a6aa52e63a2e6fb42d8c3de8db42703e7d5fb2d9814a258067f177ecbd8e5c6"} Feb 19 08:27:19 crc 
kubenswrapper[5023]: I0219 08:27:19.583714 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.584648 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1e46deac-8166-4479-8fd0-31e57a53cedf","Type":"ContainerDied","Data":"71209eedad4fb83e3237daf9015d975a981cd534cc06dd4d7b760c58ee3bda04"} Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.584694 5023 scope.go:117] "RemoveContainer" containerID="a17a58d80b937064eab21695c3c20f1ca9d240d0bfcbbbe8d70cfb05e3a77caa" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.613756 5023 scope.go:117] "RemoveContainer" containerID="04571c3105036ceeadf770895d75b7d4f5210253fc9bcad7d6e52e8343cf44b0" Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.614696 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:19 crc kubenswrapper[5023]: I0219 08:27:19.622035 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.020701 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.053459 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:20 crc kubenswrapper[5023]: E0219 08:27:20.074479 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:27:20 crc kubenswrapper[5023]: E0219 08:27:20.075851 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.076265 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqchp\" (UniqueName: \"kubernetes.io/projected/8189a688-7287-4e52-97b3-0acfd5107516-kube-api-access-lqchp\") pod \"8189a688-7287-4e52-97b3-0acfd5107516\" (UID: \"8189a688-7287-4e52-97b3-0acfd5107516\") " Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.076506 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189a688-7287-4e52-97b3-0acfd5107516-operator-scripts\") pod \"8189a688-7287-4e52-97b3-0acfd5107516\" (UID: \"8189a688-7287-4e52-97b3-0acfd5107516\") " Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.077184 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8189a688-7287-4e52-97b3-0acfd5107516-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "8189a688-7287-4e52-97b3-0acfd5107516" (UID: "8189a688-7287-4e52-97b3-0acfd5107516"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:27:20 crc kubenswrapper[5023]: E0219 08:27:20.077853 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:27:20 crc kubenswrapper[5023]: E0219 08:27:20.077922 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="4ce4b66d-d848-43ec-96cb-3f593709964a" containerName="watcher-applier" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.091889 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8189a688-7287-4e52-97b3-0acfd5107516-kube-api-access-lqchp" (OuterVolumeSpecName: "kube-api-access-lqchp") pod "8189a688-7287-4e52-97b3-0acfd5107516" (UID: "8189a688-7287-4e52-97b3-0acfd5107516"). InnerVolumeSpecName "kube-api-access-lqchp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.178068 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqchp\" (UniqueName: \"kubernetes.io/projected/8189a688-7287-4e52-97b3-0acfd5107516-kube-api-access-lqchp\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.178103 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8189a688-7287-4e52-97b3-0acfd5107516-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:20 crc kubenswrapper[5023]: E0219 08:27:20.409334 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ce4b66d_d848_43ec_96cb_3f593709964a.slice/crio-c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ce4b66d_d848_43ec_96cb_3f593709964a.slice/crio-conmon-c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc.scope\": RecentStats: unable to find data in memory cache]" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.593246 5023 generic.go:334] "Generic (PLEG): container finished" podID="4ce4b66d-d848-43ec-96cb-3f593709964a" containerID="c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc" exitCode=0 Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.593329 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4ce4b66d-d848-43ec-96cb-3f593709964a","Type":"ContainerDied","Data":"c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc"} Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.593357 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"4ce4b66d-d848-43ec-96cb-3f593709964a","Type":"ContainerDied","Data":"91e07b3c614e6a70daa50999724c667a54073afec5b36bf5d7556bbdc4218dc3"} Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.593367 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e07b3c614e6a70daa50999724c667a54073afec5b36bf5d7556bbdc4218dc3" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.597710 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerStarted","Data":"25b5616ad4f221d4a96e691ba73003337695d23cf6dcb7746df84354dda39c6c"} Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.600889 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" event={"ID":"8189a688-7287-4e52-97b3-0acfd5107516","Type":"ContainerDied","Data":"b762d910eb7b106ee3f95cc8a6cbd11461c3e8f2b1af89294be406737718583f"} Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.600912 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b762d910eb7b106ee3f95cc8a6cbd11461c3e8f2b1af89294be406737718583f" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.600957 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherbe6b-account-delete-dpd9r" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.643147 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.786792 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ce4b66d-d848-43ec-96cb-3f593709964a-logs\") pod \"4ce4b66d-d848-43ec-96cb-3f593709964a\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.787079 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-cert-memcached-mtls\") pod \"4ce4b66d-d848-43ec-96cb-3f593709964a\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.787180 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-config-data\") pod \"4ce4b66d-d848-43ec-96cb-3f593709964a\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.787306 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pncs5\" (UniqueName: \"kubernetes.io/projected/4ce4b66d-d848-43ec-96cb-3f593709964a-kube-api-access-pncs5\") pod \"4ce4b66d-d848-43ec-96cb-3f593709964a\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.787346 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce4b66d-d848-43ec-96cb-3f593709964a-logs" (OuterVolumeSpecName: "logs") pod "4ce4b66d-d848-43ec-96cb-3f593709964a" (UID: "4ce4b66d-d848-43ec-96cb-3f593709964a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.787522 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-combined-ca-bundle\") pod \"4ce4b66d-d848-43ec-96cb-3f593709964a\" (UID: \"4ce4b66d-d848-43ec-96cb-3f593709964a\") " Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.787977 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ce4b66d-d848-43ec-96cb-3f593709964a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.791609 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce4b66d-d848-43ec-96cb-3f593709964a-kube-api-access-pncs5" (OuterVolumeSpecName: "kube-api-access-pncs5") pod "4ce4b66d-d848-43ec-96cb-3f593709964a" (UID: "4ce4b66d-d848-43ec-96cb-3f593709964a"). InnerVolumeSpecName "kube-api-access-pncs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.832776 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ce4b66d-d848-43ec-96cb-3f593709964a" (UID: "4ce4b66d-d848-43ec-96cb-3f593709964a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.867001 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-config-data" (OuterVolumeSpecName: "config-data") pod "4ce4b66d-d848-43ec-96cb-3f593709964a" (UID: "4ce4b66d-d848-43ec-96cb-3f593709964a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.874902 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "4ce4b66d-d848-43ec-96cb-3f593709964a" (UID: "4ce4b66d-d848-43ec-96cb-3f593709964a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.889441 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.889485 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.889499 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pncs5\" (UniqueName: \"kubernetes.io/projected/4ce4b66d-d848-43ec-96cb-3f593709964a-kube-api-access-pncs5\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:20 crc kubenswrapper[5023]: I0219 08:27:20.889511 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce4b66d-d848-43ec-96cb-3f593709964a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.495111 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.495139 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" path="/var/lib/kubelet/pods/1e46deac-8166-4479-8fd0-31e57a53cedf/volumes" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.601911 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-custom-prometheus-ca\") pod \"1cec6a78-c63b-4db4-af9b-30815eb223c5\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.602285 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cec6a78-c63b-4db4-af9b-30815eb223c5-logs\") pod \"1cec6a78-c63b-4db4-af9b-30815eb223c5\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.602327 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-config-data\") pod \"1cec6a78-c63b-4db4-af9b-30815eb223c5\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.602358 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt6lc\" (UniqueName: \"kubernetes.io/projected/1cec6a78-c63b-4db4-af9b-30815eb223c5-kube-api-access-xt6lc\") pod \"1cec6a78-c63b-4db4-af9b-30815eb223c5\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.602380 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-cert-memcached-mtls\") pod \"1cec6a78-c63b-4db4-af9b-30815eb223c5\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.602472 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-combined-ca-bundle\") pod \"1cec6a78-c63b-4db4-af9b-30815eb223c5\" (UID: \"1cec6a78-c63b-4db4-af9b-30815eb223c5\") " Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.615474 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cec6a78-c63b-4db4-af9b-30815eb223c5-kube-api-access-xt6lc" (OuterVolumeSpecName: "kube-api-access-xt6lc") pod "1cec6a78-c63b-4db4-af9b-30815eb223c5" (UID: "1cec6a78-c63b-4db4-af9b-30815eb223c5"). InnerVolumeSpecName "kube-api-access-xt6lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.622128 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cec6a78-c63b-4db4-af9b-30815eb223c5-logs" (OuterVolumeSpecName: "logs") pod "1cec6a78-c63b-4db4-af9b-30815eb223c5" (UID: "1cec6a78-c63b-4db4-af9b-30815eb223c5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.629113 5023 generic.go:334] "Generic (PLEG): container finished" podID="1cec6a78-c63b-4db4-af9b-30815eb223c5" containerID="cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4" exitCode=0 Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.629186 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1cec6a78-c63b-4db4-af9b-30815eb223c5","Type":"ContainerDied","Data":"cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4"} Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.629222 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1cec6a78-c63b-4db4-af9b-30815eb223c5","Type":"ContainerDied","Data":"60b202dfccc74563548193596aad4306e4f06338a6348e6695b23dfeb681f9ae"} Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.629261 5023 scope.go:117] "RemoveContainer" containerID="cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.629662 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.635258 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerStarted","Data":"2e3ae5fdbd949753496dda8fe6b366c038b1619df6e8d5a4c4fde7a9c98514db"} Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.635324 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.635451 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="proxy-httpd" containerID="cri-o://2e3ae5fdbd949753496dda8fe6b366c038b1619df6e8d5a4c4fde7a9c98514db" gracePeriod=30 Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.635519 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="sg-core" containerID="cri-o://25b5616ad4f221d4a96e691ba73003337695d23cf6dcb7746df84354dda39c6c" gracePeriod=30 Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.634888 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.635743 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-central-agent" containerID="cri-o://1bc20a546ce85bd370faaa3d2bc714e9e7569ac0bc548f6a0a483419d059f6e2" gracePeriod=30 Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.635921 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-notification-agent" containerID="cri-o://3a6aa52e63a2e6fb42d8c3de8db42703e7d5fb2d9814a258067f177ecbd8e5c6" gracePeriod=30 Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.637021 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1cec6a78-c63b-4db4-af9b-30815eb223c5" (UID: "1cec6a78-c63b-4db4-af9b-30815eb223c5"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.681445 5023 scope.go:117] "RemoveContainer" containerID="cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4" Feb 19 08:27:21 crc kubenswrapper[5023]: E0219 08:27:21.684022 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4\": container with ID starting with cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4 not found: ID does not exist" containerID="cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.684060 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4"} err="failed to get container status \"cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4\": rpc error: code = NotFound desc = could not find container \"cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4\": container with ID starting with cbbafecee6b285b737c9ad96e6c35006b3c4e2f44be92945502679aaddfdffa4 not found: ID does not exist" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.686949 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-config-data" (OuterVolumeSpecName: "config-data") pod "1cec6a78-c63b-4db4-af9b-30815eb223c5" (UID: "1cec6a78-c63b-4db4-af9b-30815eb223c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.691681 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.091245136 podStartE2EDuration="5.691661943s" podCreationTimestamp="2026-02-19 08:27:16 +0000 UTC" firstStartedPulling="2026-02-19 08:27:17.723417189 +0000 UTC m=+1595.380536137" lastFinishedPulling="2026-02-19 08:27:21.323833996 +0000 UTC m=+1598.980952944" observedRunningTime="2026-02-19 08:27:21.661252195 +0000 UTC m=+1599.318371163" watchObservedRunningTime="2026-02-19 08:27:21.691661943 +0000 UTC m=+1599.348780891" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.692226 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cec6a78-c63b-4db4-af9b-30815eb223c5" (UID: "1cec6a78-c63b-4db4-af9b-30815eb223c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.696980 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.705549 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.705599 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.705638 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cec6a78-c63b-4db4-af9b-30815eb223c5-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.705648 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.705658 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt6lc\" (UniqueName: \"kubernetes.io/projected/1cec6a78-c63b-4db4-af9b-30815eb223c5-kube-api-access-xt6lc\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.705668 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.754650 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") 
pod "1cec6a78-c63b-4db4-af9b-30815eb223c5" (UID: "1cec6a78-c63b-4db4-af9b-30815eb223c5"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.806718 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1cec6a78-c63b-4db4-af9b-30815eb223c5-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.964840 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:21 crc kubenswrapper[5023]: I0219 08:27:21.973189 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.146020 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-sb9qn"] Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.153989 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-sb9qn"] Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.162735 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherbe6b-account-delete-dpd9r"] Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.168840 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-be6b-account-create-update-zcd79"] Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.174913 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherbe6b-account-delete-dpd9r"] Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.180955 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-be6b-account-create-update-zcd79"] Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.648198 5023 generic.go:334] "Generic (PLEG): 
container finished" podID="f7098b37-5e49-4763-a788-910722be2533" containerID="25b5616ad4f221d4a96e691ba73003337695d23cf6dcb7746df84354dda39c6c" exitCode=2 Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.648232 5023 generic.go:334] "Generic (PLEG): container finished" podID="f7098b37-5e49-4763-a788-910722be2533" containerID="3a6aa52e63a2e6fb42d8c3de8db42703e7d5fb2d9814a258067f177ecbd8e5c6" exitCode=0 Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.648240 5023 generic.go:334] "Generic (PLEG): container finished" podID="f7098b37-5e49-4763-a788-910722be2533" containerID="1bc20a546ce85bd370faaa3d2bc714e9e7569ac0bc548f6a0a483419d059f6e2" exitCode=0 Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.648312 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerDied","Data":"25b5616ad4f221d4a96e691ba73003337695d23cf6dcb7746df84354dda39c6c"} Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.648383 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerDied","Data":"3a6aa52e63a2e6fb42d8c3de8db42703e7d5fb2d9814a258067f177ecbd8e5c6"} Feb 19 08:27:22 crc kubenswrapper[5023]: I0219 08:27:22.648398 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerDied","Data":"1bc20a546ce85bd370faaa3d2bc714e9e7569ac0bc548f6a0a483419d059f6e2"} Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.256842 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-f2lv4"] Feb 19 08:27:23 crc kubenswrapper[5023]: E0219 08:27:23.257535 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce4b66d-d848-43ec-96cb-3f593709964a" containerName="watcher-applier" Feb 19 08:27:23 crc 
kubenswrapper[5023]: I0219 08:27:23.257552 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce4b66d-d848-43ec-96cb-3f593709964a" containerName="watcher-applier" Feb 19 08:27:23 crc kubenswrapper[5023]: E0219 08:27:23.257563 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-api" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257570 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-api" Feb 19 08:27:23 crc kubenswrapper[5023]: E0219 08:27:23.257579 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8189a688-7287-4e52-97b3-0acfd5107516" containerName="mariadb-account-delete" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257608 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="8189a688-7287-4e52-97b3-0acfd5107516" containerName="mariadb-account-delete" Feb 19 08:27:23 crc kubenswrapper[5023]: E0219 08:27:23.257637 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cec6a78-c63b-4db4-af9b-30815eb223c5" containerName="watcher-decision-engine" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257643 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cec6a78-c63b-4db4-af9b-30815eb223c5" containerName="watcher-decision-engine" Feb 19 08:27:23 crc kubenswrapper[5023]: E0219 08:27:23.257664 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-kuttl-api-log" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257673 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-kuttl-api-log" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257819 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce4b66d-d848-43ec-96cb-3f593709964a" containerName="watcher-applier" 
Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257830 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-api" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257842 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e46deac-8166-4479-8fd0-31e57a53cedf" containerName="watcher-kuttl-api-log" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257853 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cec6a78-c63b-4db4-af9b-30815eb223c5" containerName="watcher-decision-engine" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.257860 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="8189a688-7287-4e52-97b3-0acfd5107516" containerName="mariadb-account-delete" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.258373 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.285548 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-f2lv4"] Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.334263 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw57g\" (UniqueName: \"kubernetes.io/projected/692d519a-e654-44ec-aff8-0d3dc630d5cf-kube-api-access-tw57g\") pod \"watcher-db-create-f2lv4\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.334586 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692d519a-e654-44ec-aff8-0d3dc630d5cf-operator-scripts\") pod \"watcher-db-create-f2lv4\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " 
pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.367575 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6"] Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.368646 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.370828 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.384195 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6"] Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.436351 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw57g\" (UniqueName: \"kubernetes.io/projected/692d519a-e654-44ec-aff8-0d3dc630d5cf-kube-api-access-tw57g\") pod \"watcher-db-create-f2lv4\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.436444 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whlzn\" (UniqueName: \"kubernetes.io/projected/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-kube-api-access-whlzn\") pod \"watcher-b97b-account-create-update-7dmh6\" (UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.436484 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-operator-scripts\") pod \"watcher-b97b-account-create-update-7dmh6\" 
(UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.436697 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692d519a-e654-44ec-aff8-0d3dc630d5cf-operator-scripts\") pod \"watcher-db-create-f2lv4\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.437537 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692d519a-e654-44ec-aff8-0d3dc630d5cf-operator-scripts\") pod \"watcher-db-create-f2lv4\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.464244 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw57g\" (UniqueName: \"kubernetes.io/projected/692d519a-e654-44ec-aff8-0d3dc630d5cf-kube-api-access-tw57g\") pod \"watcher-db-create-f2lv4\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.495319 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cec6a78-c63b-4db4-af9b-30815eb223c5" path="/var/lib/kubelet/pods/1cec6a78-c63b-4db4-af9b-30815eb223c5/volumes" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.496208 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7f513c-ac66-4a8b-8f32-9e79f665b4b8" path="/var/lib/kubelet/pods/3b7f513c-ac66-4a8b-8f32-9e79f665b4b8/volumes" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.496920 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce4b66d-d848-43ec-96cb-3f593709964a" 
path="/var/lib/kubelet/pods/4ce4b66d-d848-43ec-96cb-3f593709964a/volumes" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.497831 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8189a688-7287-4e52-97b3-0acfd5107516" path="/var/lib/kubelet/pods/8189a688-7287-4e52-97b3-0acfd5107516/volumes" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.498316 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3cae5ca-eba1-4699-a9f1-cb42c8266469" path="/var/lib/kubelet/pods/c3cae5ca-eba1-4699-a9f1-cb42c8266469/volumes" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.538088 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whlzn\" (UniqueName: \"kubernetes.io/projected/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-kube-api-access-whlzn\") pod \"watcher-b97b-account-create-update-7dmh6\" (UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.538138 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-operator-scripts\") pod \"watcher-b97b-account-create-update-7dmh6\" (UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.539253 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-operator-scripts\") pod \"watcher-b97b-account-create-update-7dmh6\" (UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.557370 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-whlzn\" (UniqueName: \"kubernetes.io/projected/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-kube-api-access-whlzn\") pod \"watcher-b97b-account-create-update-7dmh6\" (UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.576312 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:23 crc kubenswrapper[5023]: I0219 08:27:23.685190 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.079635 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-f2lv4"] Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.256013 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6"] Feb 19 08:27:24 crc kubenswrapper[5023]: W0219 08:27:24.261669 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65fb2113_0a57_4fd5_94f5_e5e6c624e6f6.slice/crio-43d135ab9b7238d89c0df36fa34e4f8dd3b90b225050d6bf2dc0a9813d476d0d WatchSource:0}: Error finding container 43d135ab9b7238d89c0df36fa34e4f8dd3b90b225050d6bf2dc0a9813d476d0d: Status 404 returned error can't find the container with id 43d135ab9b7238d89c0df36fa34e4f8dd3b90b225050d6bf2dc0a9813d476d0d Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.669732 5023 generic.go:334] "Generic (PLEG): container finished" podID="692d519a-e654-44ec-aff8-0d3dc630d5cf" containerID="fa473ab2a333fe0b734e89e18c488362343404813d0473f26c9811b1b2af5fc1" exitCode=0 Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.669801 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-db-create-f2lv4" event={"ID":"692d519a-e654-44ec-aff8-0d3dc630d5cf","Type":"ContainerDied","Data":"fa473ab2a333fe0b734e89e18c488362343404813d0473f26c9811b1b2af5fc1"} Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.669825 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-f2lv4" event={"ID":"692d519a-e654-44ec-aff8-0d3dc630d5cf","Type":"ContainerStarted","Data":"f8a6d4b8d208fa08a156f69866b7ccf460c98b6307bf0e72ddc4eb1b68e40bcc"} Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.672809 5023 generic.go:334] "Generic (PLEG): container finished" podID="65fb2113-0a57-4fd5-94f5-e5e6c624e6f6" containerID="1c3a8891db4cd46509c7f3c80a4048b9e263f3ebfead53f37d3e08bf0d4e04e4" exitCode=0 Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.672889 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" event={"ID":"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6","Type":"ContainerDied","Data":"1c3a8891db4cd46509c7f3c80a4048b9e263f3ebfead53f37d3e08bf0d4e04e4"} Feb 19 08:27:24 crc kubenswrapper[5023]: I0219 08:27:24.673123 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" event={"ID":"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6","Type":"ContainerStarted","Data":"43d135ab9b7238d89c0df36fa34e4f8dd3b90b225050d6bf2dc0a9813d476d0d"} Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.139888 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.145529 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.201501 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw57g\" (UniqueName: \"kubernetes.io/projected/692d519a-e654-44ec-aff8-0d3dc630d5cf-kube-api-access-tw57g\") pod \"692d519a-e654-44ec-aff8-0d3dc630d5cf\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.201580 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692d519a-e654-44ec-aff8-0d3dc630d5cf-operator-scripts\") pod \"692d519a-e654-44ec-aff8-0d3dc630d5cf\" (UID: \"692d519a-e654-44ec-aff8-0d3dc630d5cf\") " Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.202449 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692d519a-e654-44ec-aff8-0d3dc630d5cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "692d519a-e654-44ec-aff8-0d3dc630d5cf" (UID: "692d519a-e654-44ec-aff8-0d3dc630d5cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.221096 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692d519a-e654-44ec-aff8-0d3dc630d5cf-kube-api-access-tw57g" (OuterVolumeSpecName: "kube-api-access-tw57g") pod "692d519a-e654-44ec-aff8-0d3dc630d5cf" (UID: "692d519a-e654-44ec-aff8-0d3dc630d5cf"). InnerVolumeSpecName "kube-api-access-tw57g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.302753 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whlzn\" (UniqueName: \"kubernetes.io/projected/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-kube-api-access-whlzn\") pod \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\" (UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.303036 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-operator-scripts\") pod \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\" (UID: \"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6\") " Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.303425 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw57g\" (UniqueName: \"kubernetes.io/projected/692d519a-e654-44ec-aff8-0d3dc630d5cf-kube-api-access-tw57g\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.303442 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692d519a-e654-44ec-aff8-0d3dc630d5cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.303508 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65fb2113-0a57-4fd5-94f5-e5e6c624e6f6" (UID: "65fb2113-0a57-4fd5-94f5-e5e6c624e6f6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.306118 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-kube-api-access-whlzn" (OuterVolumeSpecName: "kube-api-access-whlzn") pod "65fb2113-0a57-4fd5-94f5-e5e6c624e6f6" (UID: "65fb2113-0a57-4fd5-94f5-e5e6c624e6f6"). InnerVolumeSpecName "kube-api-access-whlzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.405021 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.405345 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whlzn\" (UniqueName: \"kubernetes.io/projected/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6-kube-api-access-whlzn\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.690657 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-f2lv4" event={"ID":"692d519a-e654-44ec-aff8-0d3dc630d5cf","Type":"ContainerDied","Data":"f8a6d4b8d208fa08a156f69866b7ccf460c98b6307bf0e72ddc4eb1b68e40bcc"} Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.690699 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-f2lv4" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.690710 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8a6d4b8d208fa08a156f69866b7ccf460c98b6307bf0e72ddc4eb1b68e40bcc" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.692381 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" event={"ID":"65fb2113-0a57-4fd5-94f5-e5e6c624e6f6","Type":"ContainerDied","Data":"43d135ab9b7238d89c0df36fa34e4f8dd3b90b225050d6bf2dc0a9813d476d0d"} Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.692404 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43d135ab9b7238d89c0df36fa34e4f8dd3b90b225050d6bf2dc0a9813d476d0d" Feb 19 08:27:26 crc kubenswrapper[5023]: I0219 08:27:26.692458 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.704220 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg"] Feb 19 08:27:28 crc kubenswrapper[5023]: E0219 08:27:28.705213 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692d519a-e654-44ec-aff8-0d3dc630d5cf" containerName="mariadb-database-create" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.705230 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="692d519a-e654-44ec-aff8-0d3dc630d5cf" containerName="mariadb-database-create" Feb 19 08:27:28 crc kubenswrapper[5023]: E0219 08:27:28.705268 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fb2113-0a57-4fd5-94f5-e5e6c624e6f6" containerName="mariadb-account-create-update" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.705276 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="65fb2113-0a57-4fd5-94f5-e5e6c624e6f6" containerName="mariadb-account-create-update" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.705439 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="65fb2113-0a57-4fd5-94f5-e5e6c624e6f6" containerName="mariadb-account-create-update" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.705464 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="692d519a-e654-44ec-aff8-0d3dc630d5cf" containerName="mariadb-database-create" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.706237 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.709307 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-58s9c" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.709645 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.720381 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg"] Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.847415 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.847492 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-combined-ca-bundle\") pod 
\"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.847517 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh92m\" (UniqueName: \"kubernetes.io/projected/ae1e913a-3002-4c09-817a-306d396ff4b7-kube-api-access-zh92m\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.847542 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-config-data\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.949519 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.949606 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.949647 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh92m\" (UniqueName: 
\"kubernetes.io/projected/ae1e913a-3002-4c09-817a-306d396ff4b7-kube-api-access-zh92m\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.949687 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-config-data\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.955449 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.956307 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-config-data\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.961461 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:28 crc kubenswrapper[5023]: I0219 08:27:28.968429 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh92m\" (UniqueName: 
\"kubernetes.io/projected/ae1e913a-3002-4c09-817a-306d396ff4b7-kube-api-access-zh92m\") pod \"watcher-kuttl-db-sync-n5pqg\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:29 crc kubenswrapper[5023]: I0219 08:27:29.028662 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:29 crc kubenswrapper[5023]: I0219 08:27:29.181283 5023 scope.go:117] "RemoveContainer" containerID="aa539d8ae370b06bce71d1638a64a6d4fefc06e4f716f1c53cbb6346fe82ecfb" Feb 19 08:27:29 crc kubenswrapper[5023]: I0219 08:27:29.206035 5023 scope.go:117] "RemoveContainer" containerID="1ca6c1993a5683d8a5908d428f4943a4dd6c76d84bfd17392c518c29d9e7c4a0" Feb 19 08:27:29 crc kubenswrapper[5023]: I0219 08:27:29.535522 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg"] Feb 19 08:27:29 crc kubenswrapper[5023]: I0219 08:27:29.735123 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" event={"ID":"ae1e913a-3002-4c09-817a-306d396ff4b7","Type":"ContainerStarted","Data":"460da3be2bde283181b52ea782bfe5f4793ee856e07736c0968cdc3a52d4313d"} Feb 19 08:27:29 crc kubenswrapper[5023]: I0219 08:27:29.735182 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" event={"ID":"ae1e913a-3002-4c09-817a-306d396ff4b7","Type":"ContainerStarted","Data":"c7ec61ac113272e7bcc948a92e09d9eb8d30c25d3c7fb3ed410a3736893a3115"} Feb 19 08:27:29 crc kubenswrapper[5023]: I0219 08:27:29.752366 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" podStartSLOduration=1.752334597 podStartE2EDuration="1.752334597s" podCreationTimestamp="2026-02-19 08:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:29.75094298 +0000 UTC m=+1607.408061928" watchObservedRunningTime="2026-02-19 08:27:29.752334597 +0000 UTC m=+1607.409453555" Feb 19 08:27:32 crc kubenswrapper[5023]: I0219 08:27:32.763679 5023 generic.go:334] "Generic (PLEG): container finished" podID="ae1e913a-3002-4c09-817a-306d396ff4b7" containerID="460da3be2bde283181b52ea782bfe5f4793ee856e07736c0968cdc3a52d4313d" exitCode=0 Feb 19 08:27:32 crc kubenswrapper[5023]: I0219 08:27:32.763794 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" event={"ID":"ae1e913a-3002-4c09-817a-306d396ff4b7","Type":"ContainerDied","Data":"460da3be2bde283181b52ea782bfe5f4793ee856e07736c0968cdc3a52d4313d"} Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.109528 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.249066 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-combined-ca-bundle\") pod \"ae1e913a-3002-4c09-817a-306d396ff4b7\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.249119 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-config-data\") pod \"ae1e913a-3002-4c09-817a-306d396ff4b7\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.249180 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-db-sync-config-data\") pod 
\"ae1e913a-3002-4c09-817a-306d396ff4b7\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.249220 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh92m\" (UniqueName: \"kubernetes.io/projected/ae1e913a-3002-4c09-817a-306d396ff4b7-kube-api-access-zh92m\") pod \"ae1e913a-3002-4c09-817a-306d396ff4b7\" (UID: \"ae1e913a-3002-4c09-817a-306d396ff4b7\") " Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.254878 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ae1e913a-3002-4c09-817a-306d396ff4b7" (UID: "ae1e913a-3002-4c09-817a-306d396ff4b7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.265052 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1e913a-3002-4c09-817a-306d396ff4b7-kube-api-access-zh92m" (OuterVolumeSpecName: "kube-api-access-zh92m") pod "ae1e913a-3002-4c09-817a-306d396ff4b7" (UID: "ae1e913a-3002-4c09-817a-306d396ff4b7"). InnerVolumeSpecName "kube-api-access-zh92m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.273726 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae1e913a-3002-4c09-817a-306d396ff4b7" (UID: "ae1e913a-3002-4c09-817a-306d396ff4b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.293194 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-config-data" (OuterVolumeSpecName: "config-data") pod "ae1e913a-3002-4c09-817a-306d396ff4b7" (UID: "ae1e913a-3002-4c09-817a-306d396ff4b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.350580 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.350609 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.350638 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ae1e913a-3002-4c09-817a-306d396ff4b7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.350651 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh92m\" (UniqueName: \"kubernetes.io/projected/ae1e913a-3002-4c09-817a-306d396ff4b7-kube-api-access-zh92m\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.784801 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" event={"ID":"ae1e913a-3002-4c09-817a-306d396ff4b7","Type":"ContainerDied","Data":"c7ec61ac113272e7bcc948a92e09d9eb8d30c25d3c7fb3ed410a3736893a3115"} Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.785229 5023 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="c7ec61ac113272e7bcc948a92e09d9eb8d30c25d3c7fb3ed410a3736893a3115" Feb 19 08:27:34 crc kubenswrapper[5023]: I0219 08:27:34.784862 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.080306 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:35 crc kubenswrapper[5023]: E0219 08:27:35.081022 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1e913a-3002-4c09-817a-306d396ff4b7" containerName="watcher-kuttl-db-sync" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.081211 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1e913a-3002-4c09-817a-306d396ff4b7" containerName="watcher-kuttl-db-sync" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.081457 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1e913a-3002-4c09-817a-306d396ff4b7" containerName="watcher-kuttl-db-sync" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.082341 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.097028 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-58s9c" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.099888 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.105840 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.162405 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.162503 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/a6692ddc-e41e-45ad-b679-b07b172725ee-kube-api-access-nmfdq\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.162800 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.162903 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6692ddc-e41e-45ad-b679-b07b172725ee-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.163027 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.163060 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.170374 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.171807 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.174851 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.189789 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.267671 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.267866 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/a6692ddc-e41e-45ad-b679-b07b172725ee-kube-api-access-nmfdq\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.267944 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268003 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268034 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef77c72-fcfc-4639-b502-c5c4f5637206-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268068 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvcnw\" (UniqueName: \"kubernetes.io/projected/0ef77c72-fcfc-4639-b502-c5c4f5637206-kube-api-access-kvcnw\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268114 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268160 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6692ddc-e41e-45ad-b679-b07b172725ee-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268241 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268263 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.268295 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.270580 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6692ddc-e41e-45ad-b679-b07b172725ee-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.278694 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.281014 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 
08:27:35.282850 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.286218 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.287742 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.290087 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/a6692ddc-e41e-45ad-b679-b07b172725ee-kube-api-access-nmfdq\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.290537 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.304465 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.325412 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369711 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369773 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxj2h\" (UniqueName: \"kubernetes.io/projected/02cd14e6-a4eb-42b5-89af-3694dac5b74f-kube-api-access-wxj2h\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369808 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369828 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369854 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369887 
5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369906 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef77c72-fcfc-4639-b502-c5c4f5637206-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369927 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvcnw\" (UniqueName: \"kubernetes.io/projected/0ef77c72-fcfc-4639-b502-c5c4f5637206-kube-api-access-kvcnw\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369948 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cd14e6-a4eb-42b5-89af-3694dac5b74f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.369967 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 
08:27:35.369984 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.388599 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.389737 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef77c72-fcfc-4639-b502-c5c4f5637206-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.392468 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.392994 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.394793 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kvcnw\" (UniqueName: \"kubernetes.io/projected/0ef77c72-fcfc-4639-b502-c5c4f5637206-kube-api-access-kvcnw\") pod \"watcher-kuttl-applier-0\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.401658 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.471218 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxj2h\" (UniqueName: \"kubernetes.io/projected/02cd14e6-a4eb-42b5-89af-3694dac5b74f-kube-api-access-wxj2h\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.471520 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.471601 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.471732 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cd14e6-a4eb-42b5-89af-3694dac5b74f-logs\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.471804 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.471873 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.472453 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cd14e6-a4eb-42b5-89af-3694dac5b74f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.481297 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.481383 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.482270 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.482391 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.491062 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.495104 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxj2h\" (UniqueName: \"kubernetes.io/projected/02cd14e6-a4eb-42b5-89af-3694dac5b74f-kube-api-access-wxj2h\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.787609 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:35 crc kubenswrapper[5023]: I0219 08:27:35.934981 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.020634 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:36 crc kubenswrapper[5023]: W0219 08:27:36.035442 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef77c72_fcfc_4639_b502_c5c4f5637206.slice/crio-fd16e398a492a25ac260c1e1169b657f2c53fbd7b8264d41dfd50cde197dcece WatchSource:0}: Error finding container fd16e398a492a25ac260c1e1169b657f2c53fbd7b8264d41dfd50cde197dcece: Status 404 returned error can't find the container with id fd16e398a492a25ac260c1e1169b657f2c53fbd7b8264d41dfd50cde197dcece Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.423477 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:36 crc kubenswrapper[5023]: W0219 08:27:36.433864 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02cd14e6_a4eb_42b5_89af_3694dac5b74f.slice/crio-47d07a981e6a402ce9016a82ed6274f0c2b5abb3b660be1b38d10e555140fd6b WatchSource:0}: Error finding container 47d07a981e6a402ce9016a82ed6274f0c2b5abb3b660be1b38d10e555140fd6b: Status 404 returned error can't find the container with id 47d07a981e6a402ce9016a82ed6274f0c2b5abb3b660be1b38d10e555140fd6b Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.821170 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"a6692ddc-e41e-45ad-b679-b07b172725ee","Type":"ContainerStarted","Data":"dd09a79668cf2b0b8cb68a21f0b224fed21ca57c397253ac34e9b7d720da5551"} Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.821597 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.821612 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a6692ddc-e41e-45ad-b679-b07b172725ee","Type":"ContainerStarted","Data":"32b3c39dd947cb49c6282fce695f830fa80ea694759209dee4a48d02d235e5f3"} Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.821639 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a6692ddc-e41e-45ad-b679-b07b172725ee","Type":"ContainerStarted","Data":"0304c4c890d2dc3b1bac264fc9b3f4f12777155f2531c448a3d7291d09614308"} Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.823987 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"0ef77c72-fcfc-4639-b502-c5c4f5637206","Type":"ContainerStarted","Data":"0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726"} Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.824042 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"0ef77c72-fcfc-4639-b502-c5c4f5637206","Type":"ContainerStarted","Data":"fd16e398a492a25ac260c1e1169b657f2c53fbd7b8264d41dfd50cde197dcece"} Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.826305 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02cd14e6-a4eb-42b5-89af-3694dac5b74f","Type":"ContainerStarted","Data":"d0f72cc43354fbce6c5996f038f67f2b2bbd52fbe366efde47cf70a50cc6f1c3"} Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 
08:27:36.826542 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02cd14e6-a4eb-42b5-89af-3694dac5b74f","Type":"ContainerStarted","Data":"47d07a981e6a402ce9016a82ed6274f0c2b5abb3b660be1b38d10e555140fd6b"} Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.845851 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.845836531 podStartE2EDuration="1.845836531s" podCreationTimestamp="2026-02-19 08:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:36.837840369 +0000 UTC m=+1614.494959317" watchObservedRunningTime="2026-02-19 08:27:36.845836531 +0000 UTC m=+1614.502955479" Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.863382 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.863364696 podStartE2EDuration="1.863364696s" podCreationTimestamp="2026-02-19 08:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:36.855397035 +0000 UTC m=+1614.512515983" watchObservedRunningTime="2026-02-19 08:27:36.863364696 +0000 UTC m=+1614.520483644" Feb 19 08:27:36 crc kubenswrapper[5023]: I0219 08:27:36.883336 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.8833178259999999 podStartE2EDuration="1.883317826s" podCreationTimestamp="2026-02-19 08:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:27:36.87780496 +0000 UTC m=+1614.534923898" watchObservedRunningTime="2026-02-19 08:27:36.883317826 
+0000 UTC m=+1614.540436774" Feb 19 08:27:39 crc kubenswrapper[5023]: I0219 08:27:39.070845 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:40 crc kubenswrapper[5023]: I0219 08:27:40.401841 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:40 crc kubenswrapper[5023]: I0219 08:27:40.491680 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:41 crc kubenswrapper[5023]: I0219 08:27:41.869783 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:27:41 crc kubenswrapper[5023]: I0219 08:27:41.870182 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.402803 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.411557 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.494957 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.518975 5023 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.788290 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.814514 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.902903 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.907420 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.928119 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:45 crc kubenswrapper[5023]: I0219 08:27:45.932144 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:46 crc kubenswrapper[5023]: I0219 08:27:46.955033 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 08:27:51 crc kubenswrapper[5023]: E0219 08:27:51.884044 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7098b37_5e49_4763_a788_910722be2533.slice/crio-conmon-2e3ae5fdbd949753496dda8fe6b366c038b1619df6e8d5a4c4fde7a9c98514db.scope\": RecentStats: unable to find data in memory cache]" Feb 19 08:27:51 crc 
kubenswrapper[5023]: I0219 08:27:51.959599 5023 generic.go:334] "Generic (PLEG): container finished" podID="f7098b37-5e49-4763-a788-910722be2533" containerID="2e3ae5fdbd949753496dda8fe6b366c038b1619df6e8d5a4c4fde7a9c98514db" exitCode=137 Feb 19 08:27:51 crc kubenswrapper[5023]: I0219 08:27:51.960144 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerDied","Data":"2e3ae5fdbd949753496dda8fe6b366c038b1619df6e8d5a4c4fde7a9c98514db"} Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.134411 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180193 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-sg-core-conf-yaml\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180260 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skhrp\" (UniqueName: \"kubernetes.io/projected/f7098b37-5e49-4763-a788-910722be2533-kube-api-access-skhrp\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180333 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-log-httpd\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180430 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-config-data\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180472 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-run-httpd\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180502 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-scripts\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180542 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-ceilometer-tls-certs\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.180580 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-combined-ca-bundle\") pod \"f7098b37-5e49-4763-a788-910722be2533\" (UID: \"f7098b37-5e49-4763-a788-910722be2533\") " Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.181362 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.182515 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.190342 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7098b37-5e49-4763-a788-910722be2533-kube-api-access-skhrp" (OuterVolumeSpecName: "kube-api-access-skhrp") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "kube-api-access-skhrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.198147 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-scripts" (OuterVolumeSpecName: "scripts") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.218017 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.249098 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.265521 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-config-data" (OuterVolumeSpecName: "config-data") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.285765 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.285819 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skhrp\" (UniqueName: \"kubernetes.io/projected/f7098b37-5e49-4763-a788-910722be2533-kube-api-access-skhrp\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.285835 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.285848 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-config-data\") on node \"crc\" DevicePath 
\"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.285856 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7098b37-5e49-4763-a788-910722be2533-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.285864 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.285873 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.293777 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7098b37-5e49-4763-a788-910722be2533" (UID: "f7098b37-5e49-4763-a788-910722be2533"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.387443 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7098b37-5e49-4763-a788-910722be2533-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.970770 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f7098b37-5e49-4763-a788-910722be2533","Type":"ContainerDied","Data":"e6b509bba1b7ed651579dd76ca75010f97a1a08c57fee63dded86fc012193cf9"} Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.970844 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:52 crc kubenswrapper[5023]: I0219 08:27:52.971127 5023 scope.go:117] "RemoveContainer" containerID="2e3ae5fdbd949753496dda8fe6b366c038b1619df6e8d5a4c4fde7a9c98514db" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.001928 5023 scope.go:117] "RemoveContainer" containerID="25b5616ad4f221d4a96e691ba73003337695d23cf6dcb7746df84354dda39c6c" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.005556 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.011435 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.030472 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:53 crc kubenswrapper[5023]: E0219 08:27:53.030829 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-notification-agent" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.030846 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-notification-agent" Feb 19 08:27:53 crc kubenswrapper[5023]: E0219 08:27:53.030866 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="sg-core" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.030873 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="sg-core" Feb 19 08:27:53 crc kubenswrapper[5023]: E0219 08:27:53.030906 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="proxy-httpd" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.030912 5023 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="proxy-httpd" Feb 19 08:27:53 crc kubenswrapper[5023]: E0219 08:27:53.030923 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-central-agent" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.030929 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-central-agent" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.031070 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-central-agent" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.031085 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="proxy-httpd" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.031093 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="sg-core" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.031104 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7098b37-5e49-4763-a788-910722be2533" containerName="ceilometer-notification-agent" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.032583 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.040054 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.040336 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.050962 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.062816 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.062951 5023 scope.go:117] "RemoveContainer" containerID="3a6aa52e63a2e6fb42d8c3de8db42703e7d5fb2d9814a258067f177ecbd8e5c6" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.093764 5023 scope.go:117] "RemoveContainer" containerID="1bc20a546ce85bd370faaa3d2bc714e9e7569ac0bc548f6a0a483419d059f6e2" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.130685 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.130738 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.130776 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-scripts\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.130806 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.130833 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-run-httpd\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.130849 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-log-httpd\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.131147 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnck7\" (UniqueName: \"kubernetes.io/projected/ee7695b1-3519-4641-9c6c-efeb72590155-kube-api-access-pnck7\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.131299 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232483 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-scripts\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232540 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232571 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-run-httpd\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232592 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-log-httpd\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232653 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnck7\" (UniqueName: \"kubernetes.io/projected/ee7695b1-3519-4641-9c6c-efeb72590155-kube-api-access-pnck7\") pod \"ceilometer-0\" (UID: 
\"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232691 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232726 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.232745 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.233199 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-run-httpd\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.233285 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-log-httpd\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.237265 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.237422 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-scripts\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.238166 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.249750 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnck7\" (UniqueName: \"kubernetes.io/projected/ee7695b1-3519-4641-9c6c-efeb72590155-kube-api-access-pnck7\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.250035 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.254789 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data\") pod \"ceilometer-0\" (UID: 
\"ee7695b1-3519-4641-9c6c-efeb72590155\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.360261 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.500950 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7098b37-5e49-4763-a788-910722be2533" path="/var/lib/kubelet/pods/f7098b37-5e49-4763-a788-910722be2533/volumes" Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.837855 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:53 crc kubenswrapper[5023]: W0219 08:27:53.843227 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee7695b1_3519_4641_9c6c_efeb72590155.slice/crio-c12d49f1ebed5a136def37b433bd35abf1f2102d09b91cf747e68f7b53db554a WatchSource:0}: Error finding container c12d49f1ebed5a136def37b433bd35abf1f2102d09b91cf747e68f7b53db554a: Status 404 returned error can't find the container with id c12d49f1ebed5a136def37b433bd35abf1f2102d09b91cf747e68f7b53db554a Feb 19 08:27:53 crc kubenswrapper[5023]: I0219 08:27:53.979424 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerStarted","Data":"c12d49f1ebed5a136def37b433bd35abf1f2102d09b91cf747e68f7b53db554a"} Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.208756 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg"] Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.225369 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-n5pqg"] Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.294963 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.295525 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="0ef77c72-fcfc-4639-b502-c5c4f5637206" containerName="watcher-applier" containerID="cri-o://0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" gracePeriod=30 Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.306488 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherb97b-account-delete-865fb"] Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.307570 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.321683 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherb97b-account-delete-865fb"] Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.437790 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.438040 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-kuttl-api-log" containerID="cri-o://32b3c39dd947cb49c6282fce695f830fa80ea694759209dee4a48d02d235e5f3" gracePeriod=30 Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.438171 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-api" containerID="cri-o://dd09a79668cf2b0b8cb68a21f0b224fed21ca57c397253ac34e9b7d720da5551" gracePeriod=30 Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.458809 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e939bc-718c-4fb0-a985-7f353d581efb-operator-scripts\") pod \"watcherb97b-account-delete-865fb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.458901 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwlf\" (UniqueName: \"kubernetes.io/projected/94e939bc-718c-4fb0-a985-7f353d581efb-kube-api-access-ctwlf\") pod \"watcherb97b-account-delete-865fb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.517207 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.517448 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="02cd14e6-a4eb-42b5-89af-3694dac5b74f" containerName="watcher-decision-engine" containerID="cri-o://d0f72cc43354fbce6c5996f038f67f2b2bbd52fbe366efde47cf70a50cc6f1c3" gracePeriod=30 Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.560569 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e939bc-718c-4fb0-a985-7f353d581efb-operator-scripts\") pod \"watcherb97b-account-delete-865fb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.560670 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwlf\" (UniqueName: 
\"kubernetes.io/projected/94e939bc-718c-4fb0-a985-7f353d581efb-kube-api-access-ctwlf\") pod \"watcherb97b-account-delete-865fb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.562059 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e939bc-718c-4fb0-a985-7f353d581efb-operator-scripts\") pod \"watcherb97b-account-delete-865fb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.577514 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwlf\" (UniqueName: \"kubernetes.io/projected/94e939bc-718c-4fb0-a985-7f353d581efb-kube-api-access-ctwlf\") pod \"watcherb97b-account-delete-865fb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.646473 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.992853 5023 generic.go:334] "Generic (PLEG): container finished" podID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerID="32b3c39dd947cb49c6282fce695f830fa80ea694759209dee4a48d02d235e5f3" exitCode=143 Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.992924 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a6692ddc-e41e-45ad-b679-b07b172725ee","Type":"ContainerDied","Data":"32b3c39dd947cb49c6282fce695f830fa80ea694759209dee4a48d02d235e5f3"} Feb 19 08:27:54 crc kubenswrapper[5023]: I0219 08:27:54.995429 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerStarted","Data":"b690764de2f85c6d812db01c60728749823cff968709686350762ffeb0df2781"} Feb 19 08:27:55 crc kubenswrapper[5023]: I0219 08:27:55.170314 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherb97b-account-delete-865fb"] Feb 19 08:27:55 crc kubenswrapper[5023]: I0219 08:27:55.486560 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1e913a-3002-4c09-817a-306d396ff4b7" path="/var/lib/kubelet/pods/ae1e913a-3002-4c09-817a-306d396ff4b7/volumes" Feb 19 08:27:55 crc kubenswrapper[5023]: E0219 08:27:55.500320 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:27:55 crc kubenswrapper[5023]: E0219 08:27:55.501875 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:27:55 crc kubenswrapper[5023]: E0219 08:27:55.503009 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:27:55 crc kubenswrapper[5023]: E0219 08:27:55.503052 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="0ef77c72-fcfc-4639-b502-c5c4f5637206" containerName="watcher-applier" Feb 19 08:27:55 crc kubenswrapper[5023]: I0219 08:27:55.620402 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.205:9322/\": read tcp 10.217.0.2:46032->10.217.0.205:9322: read: connection reset by peer" Feb 19 08:27:55 crc kubenswrapper[5023]: I0219 08:27:55.621040 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.205:9322/\": read tcp 10.217.0.2:46034->10.217.0.205:9322: read: connection reset by peer" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.005952 5023 generic.go:334] "Generic (PLEG): container finished" podID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerID="dd09a79668cf2b0b8cb68a21f0b224fed21ca57c397253ac34e9b7d720da5551" 
exitCode=0 Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.006060 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a6692ddc-e41e-45ad-b679-b07b172725ee","Type":"ContainerDied","Data":"dd09a79668cf2b0b8cb68a21f0b224fed21ca57c397253ac34e9b7d720da5551"} Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.006164 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a6692ddc-e41e-45ad-b679-b07b172725ee","Type":"ContainerDied","Data":"0304c4c890d2dc3b1bac264fc9b3f4f12777155f2531c448a3d7291d09614308"} Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.006188 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0304c4c890d2dc3b1bac264fc9b3f4f12777155f2531c448a3d7291d09614308" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.016673 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerStarted","Data":"b01f82dfbfb3a70cfb6209fae7669bdb687294618f7d14989fa2090408152ef8"} Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.022425 5023 generic.go:334] "Generic (PLEG): container finished" podID="94e939bc-718c-4fb0-a985-7f353d581efb" containerID="5eb23a973606a101b80aa82eacd73943737a6e06469d50aae3d51d3770e76166" exitCode=0 Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.022505 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" event={"ID":"94e939bc-718c-4fb0-a985-7f353d581efb","Type":"ContainerDied","Data":"5eb23a973606a101b80aa82eacd73943737a6e06469d50aae3d51d3770e76166"} Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.022558 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" 
event={"ID":"94e939bc-718c-4fb0-a985-7f353d581efb","Type":"ContainerStarted","Data":"6f49b69a3b6ffdbd341437fdb3977ea797943824f7555d783680569aa19120d8"} Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.087873 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.190319 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-cert-memcached-mtls\") pod \"a6692ddc-e41e-45ad-b679-b07b172725ee\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.190472 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-combined-ca-bundle\") pod \"a6692ddc-e41e-45ad-b679-b07b172725ee\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.190572 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6692ddc-e41e-45ad-b679-b07b172725ee-logs\") pod \"a6692ddc-e41e-45ad-b679-b07b172725ee\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.190684 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/a6692ddc-e41e-45ad-b679-b07b172725ee-kube-api-access-nmfdq\") pod \"a6692ddc-e41e-45ad-b679-b07b172725ee\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.190724 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-config-data\") pod \"a6692ddc-e41e-45ad-b679-b07b172725ee\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.190753 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-custom-prometheus-ca\") pod \"a6692ddc-e41e-45ad-b679-b07b172725ee\" (UID: \"a6692ddc-e41e-45ad-b679-b07b172725ee\") " Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.193272 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6692ddc-e41e-45ad-b679-b07b172725ee-logs" (OuterVolumeSpecName: "logs") pod "a6692ddc-e41e-45ad-b679-b07b172725ee" (UID: "a6692ddc-e41e-45ad-b679-b07b172725ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.214298 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6692ddc-e41e-45ad-b679-b07b172725ee-kube-api-access-nmfdq" (OuterVolumeSpecName: "kube-api-access-nmfdq") pod "a6692ddc-e41e-45ad-b679-b07b172725ee" (UID: "a6692ddc-e41e-45ad-b679-b07b172725ee"). InnerVolumeSpecName "kube-api-access-nmfdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.238612 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a6692ddc-e41e-45ad-b679-b07b172725ee" (UID: "a6692ddc-e41e-45ad-b679-b07b172725ee"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.240692 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6692ddc-e41e-45ad-b679-b07b172725ee" (UID: "a6692ddc-e41e-45ad-b679-b07b172725ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.255225 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-config-data" (OuterVolumeSpecName: "config-data") pod "a6692ddc-e41e-45ad-b679-b07b172725ee" (UID: "a6692ddc-e41e-45ad-b679-b07b172725ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.293650 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/a6692ddc-e41e-45ad-b679-b07b172725ee-kube-api-access-nmfdq\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.293683 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.293693 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.293710 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.293718 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6692ddc-e41e-45ad-b679-b07b172725ee-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.296337 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a6692ddc-e41e-45ad-b679-b07b172725ee" (UID: "a6692ddc-e41e-45ad-b679-b07b172725ee"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:56 crc kubenswrapper[5023]: I0219 08:27:56.394942 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a6692ddc-e41e-45ad-b679-b07b172725ee-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.034888 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.049721 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerStarted","Data":"399703c5659b0e0b320bbbe74a044fcac97675d869c6ac52d88572c43f5a6741"} Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.088612 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.101454 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.492706 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" path="/var/lib/kubelet/pods/a6692ddc-e41e-45ad-b679-b07b172725ee/volumes" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.617667 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.720261 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.722596 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctwlf\" (UniqueName: \"kubernetes.io/projected/94e939bc-718c-4fb0-a985-7f353d581efb-kube-api-access-ctwlf\") pod \"94e939bc-718c-4fb0-a985-7f353d581efb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.722718 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94e939bc-718c-4fb0-a985-7f353d581efb-operator-scripts\") pod \"94e939bc-718c-4fb0-a985-7f353d581efb\" (UID: \"94e939bc-718c-4fb0-a985-7f353d581efb\") " Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.723999 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e939bc-718c-4fb0-a985-7f353d581efb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94e939bc-718c-4fb0-a985-7f353d581efb" (UID: "94e939bc-718c-4fb0-a985-7f353d581efb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.728317 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e939bc-718c-4fb0-a985-7f353d581efb-kube-api-access-ctwlf" (OuterVolumeSpecName: "kube-api-access-ctwlf") pod "94e939bc-718c-4fb0-a985-7f353d581efb" (UID: "94e939bc-718c-4fb0-a985-7f353d581efb"). InnerVolumeSpecName "kube-api-access-ctwlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.824057 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef77c72-fcfc-4639-b502-c5c4f5637206-logs\") pod \"0ef77c72-fcfc-4639-b502-c5c4f5637206\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.824123 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-combined-ca-bundle\") pod \"0ef77c72-fcfc-4639-b502-c5c4f5637206\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.824212 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvcnw\" (UniqueName: \"kubernetes.io/projected/0ef77c72-fcfc-4639-b502-c5c4f5637206-kube-api-access-kvcnw\") pod \"0ef77c72-fcfc-4639-b502-c5c4f5637206\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.824297 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-config-data\") pod \"0ef77c72-fcfc-4639-b502-c5c4f5637206\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.824375 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-cert-memcached-mtls\") pod \"0ef77c72-fcfc-4639-b502-c5c4f5637206\" (UID: \"0ef77c72-fcfc-4639-b502-c5c4f5637206\") " Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.824696 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/94e939bc-718c-4fb0-a985-7f353d581efb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.824711 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctwlf\" (UniqueName: \"kubernetes.io/projected/94e939bc-718c-4fb0-a985-7f353d581efb-kube-api-access-ctwlf\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.828159 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef77c72-fcfc-4639-b502-c5c4f5637206-logs" (OuterVolumeSpecName: "logs") pod "0ef77c72-fcfc-4639-b502-c5c4f5637206" (UID: "0ef77c72-fcfc-4639-b502-c5c4f5637206"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.836914 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef77c72-fcfc-4639-b502-c5c4f5637206-kube-api-access-kvcnw" (OuterVolumeSpecName: "kube-api-access-kvcnw") pod "0ef77c72-fcfc-4639-b502-c5c4f5637206" (UID: "0ef77c72-fcfc-4639-b502-c5c4f5637206"). InnerVolumeSpecName "kube-api-access-kvcnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.858574 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ef77c72-fcfc-4639-b502-c5c4f5637206" (UID: "0ef77c72-fcfc-4639-b502-c5c4f5637206"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.901769 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-config-data" (OuterVolumeSpecName: "config-data") pod "0ef77c72-fcfc-4639-b502-c5c4f5637206" (UID: "0ef77c72-fcfc-4639-b502-c5c4f5637206"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.902367 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "0ef77c72-fcfc-4639-b502-c5c4f5637206" (UID: "0ef77c72-fcfc-4639-b502-c5c4f5637206"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.926943 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.926988 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.927001 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef77c72-fcfc-4639-b502-c5c4f5637206-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.927011 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef77c72-fcfc-4639-b502-c5c4f5637206-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 
08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.927025 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvcnw\" (UniqueName: \"kubernetes.io/projected/0ef77c72-fcfc-4639-b502-c5c4f5637206-kube-api-access-kvcnw\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:57 crc kubenswrapper[5023]: I0219 08:27:57.934181 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.046266 5023 generic.go:334] "Generic (PLEG): container finished" podID="0ef77c72-fcfc-4639-b502-c5c4f5637206" containerID="0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" exitCode=0 Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.046329 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.046348 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"0ef77c72-fcfc-4639-b502-c5c4f5637206","Type":"ContainerDied","Data":"0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726"} Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.046383 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"0ef77c72-fcfc-4639-b502-c5c4f5637206","Type":"ContainerDied","Data":"fd16e398a492a25ac260c1e1169b657f2c53fbd7b8264d41dfd50cde197dcece"} Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.046401 5023 scope.go:117] "RemoveContainer" containerID="0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.050025 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerStarted","Data":"25eba656ac038bafdb4e0365e27489f0556e184231ad6894c2670bcfe83893f2"} Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.050167 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="ceilometer-central-agent" containerID="cri-o://b690764de2f85c6d812db01c60728749823cff968709686350762ffeb0df2781" gracePeriod=30 Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.050393 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.050432 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="proxy-httpd" containerID="cri-o://25eba656ac038bafdb4e0365e27489f0556e184231ad6894c2670bcfe83893f2" gracePeriod=30 Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.050471 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="sg-core" containerID="cri-o://399703c5659b0e0b320bbbe74a044fcac97675d869c6ac52d88572c43f5a6741" gracePeriod=30 Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.050508 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="ceilometer-notification-agent" containerID="cri-o://b01f82dfbfb3a70cfb6209fae7669bdb687294618f7d14989fa2090408152ef8" gracePeriod=30 Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.056952 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" 
event={"ID":"94e939bc-718c-4fb0-a985-7f353d581efb","Type":"ContainerDied","Data":"6f49b69a3b6ffdbd341437fdb3977ea797943824f7555d783680569aa19120d8"} Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.056997 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f49b69a3b6ffdbd341437fdb3977ea797943824f7555d783680569aa19120d8" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.057004 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherb97b-account-delete-865fb" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.085997 5023 scope.go:117] "RemoveContainer" containerID="0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" Feb 19 08:27:58 crc kubenswrapper[5023]: E0219 08:27:58.090365 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726\": container with ID starting with 0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726 not found: ID does not exist" containerID="0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.090414 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726"} err="failed to get container status \"0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726\": rpc error: code = NotFound desc = could not find container \"0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726\": container with ID starting with 0939637ca92bb8969dd5c9bdb78db2709e9740907e6e497493dbab6cfc6a7726 not found: ID does not exist" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.096263 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" 
podStartSLOduration=1.184691695 podStartE2EDuration="5.096238393s" podCreationTimestamp="2026-02-19 08:27:53 +0000 UTC" firstStartedPulling="2026-02-19 08:27:53.845953861 +0000 UTC m=+1631.503072809" lastFinishedPulling="2026-02-19 08:27:57.757500569 +0000 UTC m=+1635.414619507" observedRunningTime="2026-02-19 08:27:58.081993865 +0000 UTC m=+1635.739112823" watchObservedRunningTime="2026-02-19 08:27:58.096238393 +0000 UTC m=+1635.753357351" Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.111252 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:58 crc kubenswrapper[5023]: I0219 08:27:58.118655 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.074779 5023 generic.go:334] "Generic (PLEG): container finished" podID="ee7695b1-3519-4641-9c6c-efeb72590155" containerID="399703c5659b0e0b320bbbe74a044fcac97675d869c6ac52d88572c43f5a6741" exitCode=2 Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.075539 5023 generic.go:334] "Generic (PLEG): container finished" podID="ee7695b1-3519-4641-9c6c-efeb72590155" containerID="b01f82dfbfb3a70cfb6209fae7669bdb687294618f7d14989fa2090408152ef8" exitCode=0 Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.075686 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerDied","Data":"399703c5659b0e0b320bbbe74a044fcac97675d869c6ac52d88572c43f5a6741"} Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.075719 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerDied","Data":"b01f82dfbfb3a70cfb6209fae7669bdb687294618f7d14989fa2090408152ef8"} Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.076919 5023 generic.go:334] 
"Generic (PLEG): container finished" podID="02cd14e6-a4eb-42b5-89af-3694dac5b74f" containerID="d0f72cc43354fbce6c5996f038f67f2b2bbd52fbe366efde47cf70a50cc6f1c3" exitCode=0 Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.076944 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02cd14e6-a4eb-42b5-89af-3694dac5b74f","Type":"ContainerDied","Data":"d0f72cc43354fbce6c5996f038f67f2b2bbd52fbe366efde47cf70a50cc6f1c3"} Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.151004 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.245517 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-config-data\") pod \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.245610 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxj2h\" (UniqueName: \"kubernetes.io/projected/02cd14e6-a4eb-42b5-89af-3694dac5b74f-kube-api-access-wxj2h\") pod \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.245743 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-combined-ca-bundle\") pod \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.245819 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/02cd14e6-a4eb-42b5-89af-3694dac5b74f-logs\") pod \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.245861 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-custom-prometheus-ca\") pod \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.245964 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-cert-memcached-mtls\") pod \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\" (UID: \"02cd14e6-a4eb-42b5-89af-3694dac5b74f\") " Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.247074 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02cd14e6-a4eb-42b5-89af-3694dac5b74f-logs" (OuterVolumeSpecName: "logs") pod "02cd14e6-a4eb-42b5-89af-3694dac5b74f" (UID: "02cd14e6-a4eb-42b5-89af-3694dac5b74f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.270836 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02cd14e6-a4eb-42b5-89af-3694dac5b74f-kube-api-access-wxj2h" (OuterVolumeSpecName: "kube-api-access-wxj2h") pod "02cd14e6-a4eb-42b5-89af-3694dac5b74f" (UID: "02cd14e6-a4eb-42b5-89af-3694dac5b74f"). InnerVolumeSpecName "kube-api-access-wxj2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.282961 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "02cd14e6-a4eb-42b5-89af-3694dac5b74f" (UID: "02cd14e6-a4eb-42b5-89af-3694dac5b74f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.317925 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02cd14e6-a4eb-42b5-89af-3694dac5b74f" (UID: "02cd14e6-a4eb-42b5-89af-3694dac5b74f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.331721 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-f2lv4"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.332429 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-config-data" (OuterVolumeSpecName: "config-data") pod "02cd14e6-a4eb-42b5-89af-3694dac5b74f" (UID: "02cd14e6-a4eb-42b5-89af-3694dac5b74f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.339269 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-f2lv4"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.347592 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.347757 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02cd14e6-a4eb-42b5-89af-3694dac5b74f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.347818 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.347872 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.347940 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxj2h\" (UniqueName: \"kubernetes.io/projected/02cd14e6-a4eb-42b5-89af-3694dac5b74f-kube-api-access-wxj2h\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.349810 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "02cd14e6-a4eb-42b5-89af-3694dac5b74f" (UID: "02cd14e6-a4eb-42b5-89af-3694dac5b74f"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.350701 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.358485 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-b97b-account-create-update-7dmh6"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.370920 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherb97b-account-delete-865fb"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.377426 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherb97b-account-delete-865fb"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.415474 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-lbwh8"] Feb 19 08:27:59 crc kubenswrapper[5023]: E0219 08:27:59.419273 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef77c72-fcfc-4639-b502-c5c4f5637206" containerName="watcher-applier" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419304 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef77c72-fcfc-4639-b502-c5c4f5637206" containerName="watcher-applier" Feb 19 08:27:59 crc kubenswrapper[5023]: E0219 08:27:59.419331 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-api" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419339 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-api" Feb 19 08:27:59 crc kubenswrapper[5023]: E0219 08:27:59.419351 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e939bc-718c-4fb0-a985-7f353d581efb" containerName="mariadb-account-delete" Feb 19 08:27:59 crc kubenswrapper[5023]: 
I0219 08:27:59.419359 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e939bc-718c-4fb0-a985-7f353d581efb" containerName="mariadb-account-delete" Feb 19 08:27:59 crc kubenswrapper[5023]: E0219 08:27:59.419372 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-kuttl-api-log" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419380 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-kuttl-api-log" Feb 19 08:27:59 crc kubenswrapper[5023]: E0219 08:27:59.419390 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cd14e6-a4eb-42b5-89af-3694dac5b74f" containerName="watcher-decision-engine" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419397 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cd14e6-a4eb-42b5-89af-3694dac5b74f" containerName="watcher-decision-engine" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419582 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="02cd14e6-a4eb-42b5-89af-3694dac5b74f" containerName="watcher-decision-engine" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419599 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef77c72-fcfc-4639-b502-c5c4f5637206" containerName="watcher-applier" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419609 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-api" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419640 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6692ddc-e41e-45ad-b679-b07b172725ee" containerName="watcher-kuttl-api-log" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.419652 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e939bc-718c-4fb0-a985-7f353d581efb" 
containerName="mariadb-account-delete" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.420346 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.438550 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lbwh8"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.448929 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq5rs\" (UniqueName: \"kubernetes.io/projected/c967911b-7232-46f9-b9dc-98571984b719-kube-api-access-kq5rs\") pod \"watcher-db-create-lbwh8\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.449175 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c967911b-7232-46f9-b9dc-98571984b719-operator-scripts\") pod \"watcher-db-create-lbwh8\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.449654 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/02cd14e6-a4eb-42b5-89af-3694dac5b74f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.486528 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef77c72-fcfc-4639-b502-c5c4f5637206" path="/var/lib/kubelet/pods/0ef77c72-fcfc-4639-b502-c5c4f5637206/volumes" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.487490 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65fb2113-0a57-4fd5-94f5-e5e6c624e6f6" 
path="/var/lib/kubelet/pods/65fb2113-0a57-4fd5-94f5-e5e6c624e6f6/volumes" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.488149 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692d519a-e654-44ec-aff8-0d3dc630d5cf" path="/var/lib/kubelet/pods/692d519a-e654-44ec-aff8-0d3dc630d5cf/volumes" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.489339 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e939bc-718c-4fb0-a985-7f353d581efb" path="/var/lib/kubelet/pods/94e939bc-718c-4fb0-a985-7f353d581efb/volumes" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.528889 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-4ldwg"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.529958 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.535534 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.542588 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-4ldwg"] Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.551243 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq5rs\" (UniqueName: \"kubernetes.io/projected/c967911b-7232-46f9-b9dc-98571984b719-kube-api-access-kq5rs\") pod \"watcher-db-create-lbwh8\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.551299 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c967911b-7232-46f9-b9dc-98571984b719-operator-scripts\") pod 
\"watcher-db-create-lbwh8\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.552170 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c967911b-7232-46f9-b9dc-98571984b719-operator-scripts\") pod \"watcher-db-create-lbwh8\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.568156 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq5rs\" (UniqueName: \"kubernetes.io/projected/c967911b-7232-46f9-b9dc-98571984b719-kube-api-access-kq5rs\") pod \"watcher-db-create-lbwh8\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.655442 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d6d5\" (UniqueName: \"kubernetes.io/projected/deb71e49-7f8d-4cf5-afd9-95a14a36325e-kube-api-access-4d6d5\") pod \"watcher-test-account-create-update-4ldwg\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.655605 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deb71e49-7f8d-4cf5-afd9-95a14a36325e-operator-scripts\") pod \"watcher-test-account-create-update-4ldwg\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.739819 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.756970 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d6d5\" (UniqueName: \"kubernetes.io/projected/deb71e49-7f8d-4cf5-afd9-95a14a36325e-kube-api-access-4d6d5\") pod \"watcher-test-account-create-update-4ldwg\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.757243 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deb71e49-7f8d-4cf5-afd9-95a14a36325e-operator-scripts\") pod \"watcher-test-account-create-update-4ldwg\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.757891 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deb71e49-7f8d-4cf5-afd9-95a14a36325e-operator-scripts\") pod \"watcher-test-account-create-update-4ldwg\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.773031 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d6d5\" (UniqueName: \"kubernetes.io/projected/deb71e49-7f8d-4cf5-afd9-95a14a36325e-kube-api-access-4d6d5\") pod \"watcher-test-account-create-update-4ldwg\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:27:59 crc kubenswrapper[5023]: I0219 08:27:59.846721 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.089277 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"02cd14e6-a4eb-42b5-89af-3694dac5b74f","Type":"ContainerDied","Data":"47d07a981e6a402ce9016a82ed6274f0c2b5abb3b660be1b38d10e555140fd6b"} Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.089521 5023 scope.go:117] "RemoveContainer" containerID="d0f72cc43354fbce6c5996f038f67f2b2bbd52fbe366efde47cf70a50cc6f1c3" Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.089650 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.101524 5023 generic.go:334] "Generic (PLEG): container finished" podID="ee7695b1-3519-4641-9c6c-efeb72590155" containerID="b690764de2f85c6d812db01c60728749823cff968709686350762ffeb0df2781" exitCode=0 Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.101560 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerDied","Data":"b690764de2f85c6d812db01c60728749823cff968709686350762ffeb0df2781"} Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.126887 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.136826 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.162751 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lbwh8"] Feb 19 08:28:00 crc kubenswrapper[5023]: W0219 08:28:00.173343 5023 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc967911b_7232_46f9_b9dc_98571984b719.slice/crio-f413d74c7bd2d5bc2825f7553755e022351d267aa25f5a6e52264e1a5fe3d9b6 WatchSource:0}: Error finding container f413d74c7bd2d5bc2825f7553755e022351d267aa25f5a6e52264e1a5fe3d9b6: Status 404 returned error can't find the container with id f413d74c7bd2d5bc2825f7553755e022351d267aa25f5a6e52264e1a5fe3d9b6 Feb 19 08:28:00 crc kubenswrapper[5023]: I0219 08:28:00.314676 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-4ldwg"] Feb 19 08:28:01 crc kubenswrapper[5023]: I0219 08:28:01.112069 5023 generic.go:334] "Generic (PLEG): container finished" podID="c967911b-7232-46f9-b9dc-98571984b719" containerID="b4b3dde1c71fc77cfa0ce798bf09feec907fa44aee39260fe24eebfc250874e9" exitCode=0 Feb 19 08:28:01 crc kubenswrapper[5023]: I0219 08:28:01.112174 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-lbwh8" event={"ID":"c967911b-7232-46f9-b9dc-98571984b719","Type":"ContainerDied","Data":"b4b3dde1c71fc77cfa0ce798bf09feec907fa44aee39260fe24eebfc250874e9"} Feb 19 08:28:01 crc kubenswrapper[5023]: I0219 08:28:01.112464 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-lbwh8" event={"ID":"c967911b-7232-46f9-b9dc-98571984b719","Type":"ContainerStarted","Data":"f413d74c7bd2d5bc2825f7553755e022351d267aa25f5a6e52264e1a5fe3d9b6"} Feb 19 08:28:01 crc kubenswrapper[5023]: I0219 08:28:01.115846 5023 generic.go:334] "Generic (PLEG): container finished" podID="deb71e49-7f8d-4cf5-afd9-95a14a36325e" containerID="7e14e26865ffb668305c05b7d2a3bc099c0054ac9cdb0a6099102dd5c34fc4b5" exitCode=0 Feb 19 08:28:01 crc kubenswrapper[5023]: I0219 08:28:01.115883 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" 
event={"ID":"deb71e49-7f8d-4cf5-afd9-95a14a36325e","Type":"ContainerDied","Data":"7e14e26865ffb668305c05b7d2a3bc099c0054ac9cdb0a6099102dd5c34fc4b5"} Feb 19 08:28:01 crc kubenswrapper[5023]: I0219 08:28:01.115903 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" event={"ID":"deb71e49-7f8d-4cf5-afd9-95a14a36325e","Type":"ContainerStarted","Data":"df6183366f7f10faf56d3fad237f87206317d44e0b6968b7e8cc920a0b114f9b"} Feb 19 08:28:01 crc kubenswrapper[5023]: I0219 08:28:01.486317 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02cd14e6-a4eb-42b5-89af-3694dac5b74f" path="/var/lib/kubelet/pods/02cd14e6-a4eb-42b5-89af-3694dac5b74f/volumes" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.576827 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.594581 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.598811 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq5rs\" (UniqueName: \"kubernetes.io/projected/c967911b-7232-46f9-b9dc-98571984b719-kube-api-access-kq5rs\") pod \"c967911b-7232-46f9-b9dc-98571984b719\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.598871 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c967911b-7232-46f9-b9dc-98571984b719-operator-scripts\") pod \"c967911b-7232-46f9-b9dc-98571984b719\" (UID: \"c967911b-7232-46f9-b9dc-98571984b719\") " Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.599607 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c967911b-7232-46f9-b9dc-98571984b719-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c967911b-7232-46f9-b9dc-98571984b719" (UID: "c967911b-7232-46f9-b9dc-98571984b719"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.605382 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c967911b-7232-46f9-b9dc-98571984b719-kube-api-access-kq5rs" (OuterVolumeSpecName: "kube-api-access-kq5rs") pod "c967911b-7232-46f9-b9dc-98571984b719" (UID: "c967911b-7232-46f9-b9dc-98571984b719"). InnerVolumeSpecName "kube-api-access-kq5rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.700809 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deb71e49-7f8d-4cf5-afd9-95a14a36325e-operator-scripts\") pod \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.701015 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d6d5\" (UniqueName: \"kubernetes.io/projected/deb71e49-7f8d-4cf5-afd9-95a14a36325e-kube-api-access-4d6d5\") pod \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\" (UID: \"deb71e49-7f8d-4cf5-afd9-95a14a36325e\") " Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.701345 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deb71e49-7f8d-4cf5-afd9-95a14a36325e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "deb71e49-7f8d-4cf5-afd9-95a14a36325e" (UID: "deb71e49-7f8d-4cf5-afd9-95a14a36325e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.701502 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq5rs\" (UniqueName: \"kubernetes.io/projected/c967911b-7232-46f9-b9dc-98571984b719-kube-api-access-kq5rs\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.701520 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c967911b-7232-46f9-b9dc-98571984b719-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.701530 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deb71e49-7f8d-4cf5-afd9-95a14a36325e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.704363 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb71e49-7f8d-4cf5-afd9-95a14a36325e-kube-api-access-4d6d5" (OuterVolumeSpecName: "kube-api-access-4d6d5") pod "deb71e49-7f8d-4cf5-afd9-95a14a36325e" (UID: "deb71e49-7f8d-4cf5-afd9-95a14a36325e"). InnerVolumeSpecName "kube-api-access-4d6d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:28:02 crc kubenswrapper[5023]: I0219 08:28:02.802460 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d6d5\" (UniqueName: \"kubernetes.io/projected/deb71e49-7f8d-4cf5-afd9-95a14a36325e-kube-api-access-4d6d5\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:03 crc kubenswrapper[5023]: I0219 08:28:03.136216 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" event={"ID":"deb71e49-7f8d-4cf5-afd9-95a14a36325e","Type":"ContainerDied","Data":"df6183366f7f10faf56d3fad237f87206317d44e0b6968b7e8cc920a0b114f9b"} Feb 19 08:28:03 crc kubenswrapper[5023]: I0219 08:28:03.136248 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-4ldwg" Feb 19 08:28:03 crc kubenswrapper[5023]: I0219 08:28:03.136255 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6183366f7f10faf56d3fad237f87206317d44e0b6968b7e8cc920a0b114f9b" Feb 19 08:28:03 crc kubenswrapper[5023]: I0219 08:28:03.140251 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-lbwh8" event={"ID":"c967911b-7232-46f9-b9dc-98571984b719","Type":"ContainerDied","Data":"f413d74c7bd2d5bc2825f7553755e022351d267aa25f5a6e52264e1a5fe3d9b6"} Feb 19 08:28:03 crc kubenswrapper[5023]: I0219 08:28:03.140299 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f413d74c7bd2d5bc2825f7553755e022351d267aa25f5a6e52264e1a5fe3d9b6" Feb 19 08:28:03 crc kubenswrapper[5023]: I0219 08:28:03.140313 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-lbwh8" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.780445 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9"] Feb 19 08:28:04 crc kubenswrapper[5023]: E0219 08:28:04.781209 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c967911b-7232-46f9-b9dc-98571984b719" containerName="mariadb-database-create" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.781225 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c967911b-7232-46f9-b9dc-98571984b719" containerName="mariadb-database-create" Feb 19 08:28:04 crc kubenswrapper[5023]: E0219 08:28:04.781257 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb71e49-7f8d-4cf5-afd9-95a14a36325e" containerName="mariadb-account-create-update" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.781265 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb71e49-7f8d-4cf5-afd9-95a14a36325e" containerName="mariadb-account-create-update" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.781456 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb71e49-7f8d-4cf5-afd9-95a14a36325e" containerName="mariadb-account-create-update" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.781477 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c967911b-7232-46f9-b9dc-98571984b719" containerName="mariadb-database-create" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.782149 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.785153 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-ch9pm" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.785518 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.792970 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9"] Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.833921 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpfwz\" (UniqueName: \"kubernetes.io/projected/4f4debfe-d881-4d61-bf04-553f1e641ad7-kube-api-access-tpfwz\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.834032 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-config-data\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.834185 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.834268 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.935917 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-config-data\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.935966 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.935989 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.936041 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpfwz\" (UniqueName: \"kubernetes.io/projected/4f4debfe-d881-4d61-bf04-553f1e641ad7-kube-api-access-tpfwz\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 
08:28:04.941771 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.942219 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-config-data\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.942735 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-db-sync-config-data\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:04 crc kubenswrapper[5023]: I0219 08:28:04.958399 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpfwz\" (UniqueName: \"kubernetes.io/projected/4f4debfe-d881-4d61-bf04-553f1e641ad7-kube-api-access-tpfwz\") pod \"watcher-kuttl-db-sync-tdhw9\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:05 crc kubenswrapper[5023]: I0219 08:28:05.099377 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:05 crc kubenswrapper[5023]: I0219 08:28:05.584005 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9"] Feb 19 08:28:06 crc kubenswrapper[5023]: I0219 08:28:06.165145 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" event={"ID":"4f4debfe-d881-4d61-bf04-553f1e641ad7","Type":"ContainerStarted","Data":"10b2a6fbc849751307ba9c4b3b9f5da0e16e6be5b6d16375a4bf887b5370fc98"} Feb 19 08:28:06 crc kubenswrapper[5023]: I0219 08:28:06.165444 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" event={"ID":"4f4debfe-d881-4d61-bf04-553f1e641ad7","Type":"ContainerStarted","Data":"db5f25f65d9fecbc2065209011f5295333b226fe6a0168394fd99fd71649557b"} Feb 19 08:28:06 crc kubenswrapper[5023]: I0219 08:28:06.191730 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" podStartSLOduration=2.19170286 podStartE2EDuration="2.19170286s" podCreationTimestamp="2026-02-19 08:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:28:06.184666183 +0000 UTC m=+1643.841785141" watchObservedRunningTime="2026-02-19 08:28:06.19170286 +0000 UTC m=+1643.848821808" Feb 19 08:28:08 crc kubenswrapper[5023]: I0219 08:28:08.181009 5023 generic.go:334] "Generic (PLEG): container finished" podID="4f4debfe-d881-4d61-bf04-553f1e641ad7" containerID="10b2a6fbc849751307ba9c4b3b9f5da0e16e6be5b6d16375a4bf887b5370fc98" exitCode=0 Feb 19 08:28:08 crc kubenswrapper[5023]: I0219 08:28:08.181152 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" 
event={"ID":"4f4debfe-d881-4d61-bf04-553f1e641ad7","Type":"ContainerDied","Data":"10b2a6fbc849751307ba9c4b3b9f5da0e16e6be5b6d16375a4bf887b5370fc98"} Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.569067 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.612724 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-db-sync-config-data\") pod \"4f4debfe-d881-4d61-bf04-553f1e641ad7\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.612832 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpfwz\" (UniqueName: \"kubernetes.io/projected/4f4debfe-d881-4d61-bf04-553f1e641ad7-kube-api-access-tpfwz\") pod \"4f4debfe-d881-4d61-bf04-553f1e641ad7\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.612880 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-config-data\") pod \"4f4debfe-d881-4d61-bf04-553f1e641ad7\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.613015 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-combined-ca-bundle\") pod \"4f4debfe-d881-4d61-bf04-553f1e641ad7\" (UID: \"4f4debfe-d881-4d61-bf04-553f1e641ad7\") " Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.629798 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4f4debfe-d881-4d61-bf04-553f1e641ad7" (UID: "4f4debfe-d881-4d61-bf04-553f1e641ad7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.633188 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4debfe-d881-4d61-bf04-553f1e641ad7-kube-api-access-tpfwz" (OuterVolumeSpecName: "kube-api-access-tpfwz") pod "4f4debfe-d881-4d61-bf04-553f1e641ad7" (UID: "4f4debfe-d881-4d61-bf04-553f1e641ad7"). InnerVolumeSpecName "kube-api-access-tpfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.640866 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f4debfe-d881-4d61-bf04-553f1e641ad7" (UID: "4f4debfe-d881-4d61-bf04-553f1e641ad7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.672197 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-config-data" (OuterVolumeSpecName: "config-data") pod "4f4debfe-d881-4d61-bf04-553f1e641ad7" (UID: "4f4debfe-d881-4d61-bf04-553f1e641ad7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.715558 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.715817 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.715893 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpfwz\" (UniqueName: \"kubernetes.io/projected/4f4debfe-d881-4d61-bf04-553f1e641ad7-kube-api-access-tpfwz\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:09 crc kubenswrapper[5023]: I0219 08:28:09.715958 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4debfe-d881-4d61-bf04-553f1e641ad7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.201591 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" event={"ID":"4f4debfe-d881-4d61-bf04-553f1e641ad7","Type":"ContainerDied","Data":"db5f25f65d9fecbc2065209011f5295333b226fe6a0168394fd99fd71649557b"} Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.201656 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5f25f65d9fecbc2065209011f5295333b226fe6a0168394fd99fd71649557b" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.201679 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.528545 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:28:10 crc kubenswrapper[5023]: E0219 08:28:10.529208 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4debfe-d881-4d61-bf04-553f1e641ad7" containerName="watcher-kuttl-db-sync" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.529281 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4debfe-d881-4d61-bf04-553f1e641ad7" containerName="watcher-kuttl-db-sync" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.529484 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4debfe-d881-4d61-bf04-553f1e641ad7" containerName="watcher-kuttl-db-sync" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.530379 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.533566 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-ch9pm" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.538400 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.547566 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.579741 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.581197 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.597491 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.598527 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.603223 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.613465 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.629750 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630607 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1f4b72-bd83-407b-95ff-0c5f081433dc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630710 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630733 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630772 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630790 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630810 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630838 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630860 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qrnb\" (UniqueName: \"kubernetes.io/projected/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-kube-api-access-8qrnb\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630882 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-logs\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630907 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4l8\" (UniqueName: \"kubernetes.io/projected/ba1f4b72-bd83-407b-95ff-0c5f081433dc-kube-api-access-xs4l8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630926 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.630945 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xmb8\" (UniqueName: \"kubernetes.io/projected/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-kube-api-access-6xmb8\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc 
kubenswrapper[5023]: I0219 08:28:10.630966 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.631055 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.631077 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.631100 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.631118 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 
crc kubenswrapper[5023]: I0219 08:28:10.631144 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.673707 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.674951 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.677376 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.679449 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732163 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732214 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732243 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732275 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732300 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qrnb\" (UniqueName: \"kubernetes.io/projected/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-kube-api-access-8qrnb\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732321 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-logs\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732350 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4l8\" (UniqueName: \"kubernetes.io/projected/ba1f4b72-bd83-407b-95ff-0c5f081433dc-kube-api-access-xs4l8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732369 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732388 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxtc\" (UniqueName: \"kubernetes.io/projected/e2376ce7-7c47-4c38-b062-c076da4fdbbc-kube-api-access-fcxtc\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732408 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xmb8\" (UniqueName: \"kubernetes.io/projected/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-kube-api-access-6xmb8\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732427 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732444 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732473 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732502 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732521 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2376ce7-7c47-4c38-b062-c076da4fdbbc-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732548 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732574 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732591 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732609 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732649 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732676 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1f4b72-bd83-407b-95ff-0c5f081433dc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732703 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.732725 
5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.733778 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-logs\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.733977 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.737575 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1f4b72-bd83-407b-95ff-0c5f081433dc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.738536 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.739571 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.739829 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.742026 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.742322 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.742835 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.744368 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.744560 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.744912 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.744966 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.745123 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.746172 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.749532 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qrnb\" (UniqueName: \"kubernetes.io/projected/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-kube-api-access-8qrnb\") pod \"watcher-kuttl-api-1\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.755290 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xmb8\" (UniqueName: \"kubernetes.io/projected/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-kube-api-access-6xmb8\") pod \"watcher-kuttl-api-0\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.758809 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4l8\" (UniqueName: \"kubernetes.io/projected/ba1f4b72-bd83-407b-95ff-0c5f081433dc-kube-api-access-xs4l8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.834828 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcxtc\" (UniqueName: \"kubernetes.io/projected/e2376ce7-7c47-4c38-b062-c076da4fdbbc-kube-api-access-fcxtc\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.834897 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.834946 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2376ce7-7c47-4c38-b062-c076da4fdbbc-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.834984 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.835010 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.838911 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2376ce7-7c47-4c38-b062-c076da4fdbbc-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.839985 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-combined-ca-bundle\") pod 
\"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.840319 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.846117 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.846317 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.859799 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcxtc\" (UniqueName: \"kubernetes.io/projected/e2376ce7-7c47-4c38-b062-c076da4fdbbc-kube-api-access-fcxtc\") pod \"watcher-kuttl-applier-0\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.897498 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.917911 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:10 crc kubenswrapper[5023]: I0219 08:28:10.992725 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:11 crc kubenswrapper[5023]: W0219 08:28:11.398999 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bc68ef1_4ca4_44ed_80c1_3b657104fc2f.slice/crio-90d9c9456eec8f27869f55a76bc65bd7942b123a04bb43b9b2a705e55f5298c7 WatchSource:0}: Error finding container 90d9c9456eec8f27869f55a76bc65bd7942b123a04bb43b9b2a705e55f5298c7: Status 404 returned error can't find the container with id 90d9c9456eec8f27869f55a76bc65bd7942b123a04bb43b9b2a705e55f5298c7 Feb 19 08:28:11 crc kubenswrapper[5023]: I0219 08:28:11.419786 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:28:11 crc kubenswrapper[5023]: I0219 08:28:11.475008 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:28:11 crc kubenswrapper[5023]: I0219 08:28:11.497099 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:28:11 crc kubenswrapper[5023]: I0219 08:28:11.613197 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:28:11 crc kubenswrapper[5023]: I0219 08:28:11.869859 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:28:11 crc kubenswrapper[5023]: I0219 08:28:11.869946 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.236228 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e2376ce7-7c47-4c38-b062-c076da4fdbbc","Type":"ContainerStarted","Data":"67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.236528 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e2376ce7-7c47-4c38-b062-c076da4fdbbc","Type":"ContainerStarted","Data":"94d246d0b91c548b275d70506250d3c3e04a770032e73de50948095624fc6bd7"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.238247 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ba1f4b72-bd83-407b-95ff-0c5f081433dc","Type":"ContainerStarted","Data":"337ffc8dd95b077005dc7dac668356effa8273025493f72a8072751bcbd5e3dd"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.238282 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ba1f4b72-bd83-407b-95ff-0c5f081433dc","Type":"ContainerStarted","Data":"a280918c28ed3eb41186fc60a48ddef0fade75d848792c07e7aba781e292442b"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.241836 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f","Type":"ContainerStarted","Data":"99cb880491fd8ab400dee606ef06a7445310df4e57ba84a6e1390ab583075e69"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.241872 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f","Type":"ContainerStarted","Data":"545226bffda62aaa7d9f7cd601ea0b08dd60eb63adf9cd20c30b8f21ed6d9c4d"} Feb 19 
08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.241882 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f","Type":"ContainerStarted","Data":"90d9c9456eec8f27869f55a76bc65bd7942b123a04bb43b9b2a705e55f5298c7"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.242597 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.248176 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"6aa4ffe7-15b8-4d56-a9ad-269363c8a496","Type":"ContainerStarted","Data":"8c59460cc1ed50e42e5eb0a89ae7ec6b2ecc6b5adb1874a1bf01e05051a1ed47"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.248277 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"6aa4ffe7-15b8-4d56-a9ad-269363c8a496","Type":"ContainerStarted","Data":"e60d686ae53b3fde70d37b1a8a267fb317392ee3f8052a84aa2908d70f7ed95f"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.248294 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"6aa4ffe7-15b8-4d56-a9ad-269363c8a496","Type":"ContainerStarted","Data":"eb00e3721ba9052cb3aefa68dc2c6b55066f4b559a3d37f10895248c17ea7b81"} Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.250923 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.255584 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.255568237 podStartE2EDuration="2.255568237s" podCreationTimestamp="2026-02-19 08:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:28:12.255200167 +0000 UTC m=+1649.912319115" watchObservedRunningTime="2026-02-19 08:28:12.255568237 +0000 UTC m=+1649.912687185" Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.304256 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.304233179 podStartE2EDuration="2.304233179s" podCreationTimestamp="2026-02-19 08:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:28:12.281247928 +0000 UTC m=+1649.938366876" watchObservedRunningTime="2026-02-19 08:28:12.304233179 +0000 UTC m=+1649.961352127" Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.351646 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.3515564749999998 podStartE2EDuration="2.351556475s" podCreationTimestamp="2026-02-19 08:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:28:12.335222572 +0000 UTC m=+1649.992341520" watchObservedRunningTime="2026-02-19 08:28:12.351556475 +0000 UTC m=+1650.008675433" Feb 19 08:28:12 crc kubenswrapper[5023]: I0219 08:28:12.362938 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.362906207 podStartE2EDuration="2.362906207s" podCreationTimestamp="2026-02-19 08:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:28:12.355240853 +0000 UTC m=+1650.012359801" watchObservedRunningTime="2026-02-19 08:28:12.362906207 +0000 UTC m=+1650.020025155" Feb 19 08:28:14 crc kubenswrapper[5023]: 
I0219 08:28:14.266308 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:28:14 crc kubenswrapper[5023]: I0219 08:28:14.484150 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:14 crc kubenswrapper[5023]: I0219 08:28:14.994121 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:15 crc kubenswrapper[5023]: I0219 08:28:15.847273 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:15 crc kubenswrapper[5023]: I0219 08:28:15.898384 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:15 crc kubenswrapper[5023]: I0219 08:28:15.994965 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:20 crc kubenswrapper[5023]: E0219 08:28:20.679026 5023 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.153:39412->38.102.83.153:46331: write tcp 38.102.83.153:39412->38.102.83.153:46331: write: broken pipe Feb 19 08:28:20 crc kubenswrapper[5023]: I0219 08:28:20.846957 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:20 crc kubenswrapper[5023]: I0219 08:28:20.853653 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:20 crc kubenswrapper[5023]: I0219 08:28:20.898429 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:20 crc kubenswrapper[5023]: I0219 08:28:20.904371 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:20 crc kubenswrapper[5023]: I0219 08:28:20.919068 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:20 crc kubenswrapper[5023]: I0219 08:28:20.948761 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:20 crc kubenswrapper[5023]: I0219 08:28:20.994464 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:21 crc kubenswrapper[5023]: I0219 08:28:21.019735 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:21 crc kubenswrapper[5023]: I0219 08:28:21.349833 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:21 crc kubenswrapper[5023]: I0219 08:28:21.367821 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:28:21 crc kubenswrapper[5023]: I0219 08:28:21.370706 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:28:21 crc kubenswrapper[5023]: I0219 08:28:21.427691 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:28:21 crc kubenswrapper[5023]: I0219 08:28:21.431356 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:28:23 crc kubenswrapper[5023]: I0219 08:28:23.368676 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.416191 5023 generic.go:334] "Generic (PLEG): container finished" podID="ee7695b1-3519-4641-9c6c-efeb72590155" containerID="25eba656ac038bafdb4e0365e27489f0556e184231ad6894c2670bcfe83893f2" exitCode=137 Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.416245 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerDied","Data":"25eba656ac038bafdb4e0365e27489f0556e184231ad6894c2670bcfe83893f2"} Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.570185 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673097 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-sg-core-conf-yaml\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673264 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-log-httpd\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673369 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-scripts\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673463 5023 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-ceilometer-tls-certs\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673505 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-run-httpd\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673574 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-combined-ca-bundle\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673669 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnck7\" (UniqueName: \"kubernetes.io/projected/ee7695b1-3519-4641-9c6c-efeb72590155-kube-api-access-pnck7\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673707 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.673700 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: 
"ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.674143 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.674258 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.678858 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-scripts" (OuterVolumeSpecName: "scripts") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.684755 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee7695b1-3519-4641-9c6c-efeb72590155-kube-api-access-pnck7" (OuterVolumeSpecName: "kube-api-access-pnck7") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "kube-api-access-pnck7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.701161 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.722016 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.744751 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.774615 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data" (OuterVolumeSpecName: "config-data") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.775701 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data\") pod \"ee7695b1-3519-4641-9c6c-efeb72590155\" (UID: \"ee7695b1-3519-4641-9c6c-efeb72590155\") " Feb 19 08:28:28 crc kubenswrapper[5023]: W0219 08:28:28.775959 5023 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ee7695b1-3519-4641-9c6c-efeb72590155/volumes/kubernetes.io~secret/config-data Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776027 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data" (OuterVolumeSpecName: "config-data") pod "ee7695b1-3519-4641-9c6c-efeb72590155" (UID: "ee7695b1-3519-4641-9c6c-efeb72590155"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776080 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776099 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776109 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee7695b1-3519-4641-9c6c-efeb72590155-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776118 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776127 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnck7\" (UniqueName: \"kubernetes.io/projected/ee7695b1-3519-4641-9c6c-efeb72590155-kube-api-access-pnck7\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776137 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:28 crc kubenswrapper[5023]: I0219 08:28:28.776145 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee7695b1-3519-4641-9c6c-efeb72590155-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.418220 5023 
scope.go:117] "RemoveContainer" containerID="5b9d63921e1fef29d6a528b1a7c13b0935bd47a4f627320143bad8275bfd7e3e" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.428444 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ee7695b1-3519-4641-9c6c-efeb72590155","Type":"ContainerDied","Data":"c12d49f1ebed5a136def37b433bd35abf1f2102d09b91cf747e68f7b53db554a"} Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.428493 5023 scope.go:117] "RemoveContainer" containerID="25eba656ac038bafdb4e0365e27489f0556e184231ad6894c2670bcfe83893f2" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.428617 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.434996 5023 scope.go:117] "RemoveContainer" containerID="e0b69cc32bed5c5efc3ae9e7d09fa9a33e3e72bf12c5234672652b1d9b4a3444" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.446568 5023 scope.go:117] "RemoveContainer" containerID="399703c5659b0e0b320bbbe74a044fcac97675d869c6ac52d88572c43f5a6741" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.459162 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.469466 5023 scope.go:117] "RemoveContainer" containerID="e78831a1c8143bc0c339a21f8d922671ae004379833f838c88187841b1f12ff6" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501052 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501101 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:28:29 crc kubenswrapper[5023]: E0219 08:28:29.501454 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="sg-core" Feb 
19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501470 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="sg-core" Feb 19 08:28:29 crc kubenswrapper[5023]: E0219 08:28:29.501486 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="ceilometer-central-agent" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501522 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="ceilometer-central-agent" Feb 19 08:28:29 crc kubenswrapper[5023]: E0219 08:28:29.501558 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="ceilometer-notification-agent" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501568 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="ceilometer-notification-agent" Feb 19 08:28:29 crc kubenswrapper[5023]: E0219 08:28:29.501585 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="proxy-httpd" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501591 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="proxy-httpd" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501842 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="proxy-httpd" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501855 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="sg-core" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501885 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" 
containerName="ceilometer-central-agent" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.501905 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" containerName="ceilometer-notification-agent" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.504241 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.508993 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.553212 5023 scope.go:117] "RemoveContainer" containerID="b01f82dfbfb3a70cfb6209fae7669bdb687294618f7d14989fa2090408152ef8" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.553639 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.553882 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.554087 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590427 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-run-httpd\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590468 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590499 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-scripts\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590522 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-log-httpd\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590550 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7hg\" (UniqueName: \"kubernetes.io/projected/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-kube-api-access-7t7hg\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590575 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590605 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-config-data\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.590623 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.613375 5023 scope.go:117] "RemoveContainer" containerID="b690764de2f85c6d812db01c60728749823cff968709686350762ffeb0df2781" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692305 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-run-httpd\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692365 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692397 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-scripts\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692422 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692457 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7hg\" (UniqueName: \"kubernetes.io/projected/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-kube-api-access-7t7hg\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692549 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692602 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-config-data\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.692704 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.694284 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-run-httpd\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.694377 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-log-httpd\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.698409 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.698420 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-scripts\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.698569 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.699489 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.699654 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-config-data\") pod \"ceilometer-0\" (UID: 
\"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.710746 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7hg\" (UniqueName: \"kubernetes.io/projected/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-kube-api-access-7t7hg\") pod \"ceilometer-0\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:29 crc kubenswrapper[5023]: I0219 08:28:29.885834 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:30 crc kubenswrapper[5023]: I0219 08:28:30.343864 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:28:30 crc kubenswrapper[5023]: I0219 08:28:30.437642 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerStarted","Data":"05c85e241c1afcb08310f2bbf9ac82ef61d7fbc60c8684a3c3e48a3c9632674e"} Feb 19 08:28:31 crc kubenswrapper[5023]: I0219 08:28:31.459515 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerStarted","Data":"3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a"} Feb 19 08:28:31 crc kubenswrapper[5023]: I0219 08:28:31.509708 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee7695b1-3519-4641-9c6c-efeb72590155" path="/var/lib/kubelet/pods/ee7695b1-3519-4641-9c6c-efeb72590155/volumes" Feb 19 08:28:32 crc kubenswrapper[5023]: I0219 08:28:32.471186 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerStarted","Data":"ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a"} Feb 19 08:28:33 
crc kubenswrapper[5023]: I0219 08:28:33.485332 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerStarted","Data":"b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4"} Feb 19 08:28:34 crc kubenswrapper[5023]: I0219 08:28:34.496769 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerStarted","Data":"7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc"} Feb 19 08:28:34 crc kubenswrapper[5023]: I0219 08:28:34.497235 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:28:34 crc kubenswrapper[5023]: I0219 08:28:34.518366 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.703218331 podStartE2EDuration="5.518341709s" podCreationTimestamp="2026-02-19 08:28:29 +0000 UTC" firstStartedPulling="2026-02-19 08:28:30.345437972 +0000 UTC m=+1668.002556920" lastFinishedPulling="2026-02-19 08:28:34.16056135 +0000 UTC m=+1671.817680298" observedRunningTime="2026-02-19 08:28:34.514916528 +0000 UTC m=+1672.172035486" watchObservedRunningTime="2026-02-19 08:28:34.518341709 +0000 UTC m=+1672.175460657" Feb 19 08:28:41 crc kubenswrapper[5023]: I0219 08:28:41.870811 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:28:41 crc kubenswrapper[5023]: I0219 08:28:41.871908 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:28:41 crc kubenswrapper[5023]: I0219 08:28:41.871994 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:28:41 crc kubenswrapper[5023]: I0219 08:28:41.873296 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:28:41 crc kubenswrapper[5023]: I0219 08:28:41.873407 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" gracePeriod=600 Feb 19 08:28:41 crc kubenswrapper[5023]: E0219 08:28:41.997887 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:28:42 crc kubenswrapper[5023]: I0219 08:28:42.562863 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" exitCode=0 Feb 19 08:28:42 crc kubenswrapper[5023]: I0219 
08:28:42.562949 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848"} Feb 19 08:28:42 crc kubenswrapper[5023]: I0219 08:28:42.563257 5023 scope.go:117] "RemoveContainer" containerID="647f5b89cada4aacd5c7cd75ae79b817efe4579aa22dd7a81e01906e874d0fd6" Feb 19 08:28:42 crc kubenswrapper[5023]: I0219 08:28:42.563950 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:28:42 crc kubenswrapper[5023]: E0219 08:28:42.564359 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:28:56 crc kubenswrapper[5023]: I0219 08:28:56.477658 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:28:56 crc kubenswrapper[5023]: E0219 08:28:56.478377 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:28:59 crc kubenswrapper[5023]: I0219 08:28:59.896100 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 
08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.206314 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg"] Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.212070 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.216980 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.217254 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.222820 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg"] Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.251696 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzl8w\" (UniqueName: \"kubernetes.io/projected/597f5520-4bea-4115-8a7b-486ea2948e4a-kube-api-access-lzl8w\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.251852 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.252044 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-config-data\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.252116 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-scripts-volume\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.353963 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzl8w\" (UniqueName: \"kubernetes.io/projected/597f5520-4bea-4115-8a7b-486ea2948e4a-kube-api-access-lzl8w\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.354036 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.354081 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-config-data\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 
08:29:00.354126 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-scripts-volume\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.362185 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-scripts-volume\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.366552 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-config-data\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.368533 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.368862 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzl8w\" (UniqueName: \"kubernetes.io/projected/597f5520-4bea-4115-8a7b-486ea2948e4a-kube-api-access-lzl8w\") pod \"watcher-kuttl-db-purge-29524829-p5rwg\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:00 crc kubenswrapper[5023]: I0219 08:29:00.538490 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:01 crc kubenswrapper[5023]: I0219 08:29:01.003943 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg"] Feb 19 08:29:01 crc kubenswrapper[5023]: I0219 08:29:01.739597 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" event={"ID":"597f5520-4bea-4115-8a7b-486ea2948e4a","Type":"ContainerStarted","Data":"221de5c78ff65ec411f137c5a011e175822e722191e975380d26b15351192f50"} Feb 19 08:29:01 crc kubenswrapper[5023]: I0219 08:29:01.739943 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" event={"ID":"597f5520-4bea-4115-8a7b-486ea2948e4a","Type":"ContainerStarted","Data":"40a9f6ab5d4cba7962987447bf4b8dee19f56a1d429bf384ee765668a2b26daa"} Feb 19 08:29:01 crc kubenswrapper[5023]: I0219 08:29:01.763184 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" podStartSLOduration=1.7631581330000001 podStartE2EDuration="1.763158133s" podCreationTimestamp="2026-02-19 08:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:29:01.754348439 +0000 UTC m=+1699.411467397" watchObservedRunningTime="2026-02-19 08:29:01.763158133 +0000 UTC m=+1699.420277101" Feb 19 08:29:03 crc kubenswrapper[5023]: I0219 08:29:03.075312 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/root-account-create-update-mrpbv"] Feb 19 08:29:03 crc kubenswrapper[5023]: I0219 08:29:03.083463 5023 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-mrpbv"] Feb 19 08:29:03 crc kubenswrapper[5023]: I0219 08:29:03.487593 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ffbb79-fdf2-40d0-9f7b-09b5d8441476" path="/var/lib/kubelet/pods/58ffbb79-fdf2-40d0-9f7b-09b5d8441476/volumes" Feb 19 08:29:03 crc kubenswrapper[5023]: I0219 08:29:03.756501 5023 generic.go:334] "Generic (PLEG): container finished" podID="597f5520-4bea-4115-8a7b-486ea2948e4a" containerID="221de5c78ff65ec411f137c5a011e175822e722191e975380d26b15351192f50" exitCode=0 Feb 19 08:29:03 crc kubenswrapper[5023]: I0219 08:29:03.756547 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" event={"ID":"597f5520-4bea-4115-8a7b-486ea2948e4a","Type":"ContainerDied","Data":"221de5c78ff65ec411f137c5a011e175822e722191e975380d26b15351192f50"} Feb 19 08:29:04 crc kubenswrapper[5023]: I0219 08:29:04.021689 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-92rvd"] Feb 19 08:29:04 crc kubenswrapper[5023]: I0219 08:29:04.028193 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-92rvd"] Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.032292 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-e496-account-create-update-q9z6j"] Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.039438 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-e496-account-create-update-q9z6j"] Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.101423 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.133191 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-config-data\") pod \"597f5520-4bea-4115-8a7b-486ea2948e4a\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.133323 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-combined-ca-bundle\") pod \"597f5520-4bea-4115-8a7b-486ea2948e4a\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.133394 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzl8w\" (UniqueName: \"kubernetes.io/projected/597f5520-4bea-4115-8a7b-486ea2948e4a-kube-api-access-lzl8w\") pod \"597f5520-4bea-4115-8a7b-486ea2948e4a\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.133595 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-scripts-volume\") pod \"597f5520-4bea-4115-8a7b-486ea2948e4a\" (UID: \"597f5520-4bea-4115-8a7b-486ea2948e4a\") " Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.138524 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597f5520-4bea-4115-8a7b-486ea2948e4a-kube-api-access-lzl8w" (OuterVolumeSpecName: "kube-api-access-lzl8w") pod "597f5520-4bea-4115-8a7b-486ea2948e4a" (UID: "597f5520-4bea-4115-8a7b-486ea2948e4a"). InnerVolumeSpecName "kube-api-access-lzl8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.143002 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "597f5520-4bea-4115-8a7b-486ea2948e4a" (UID: "597f5520-4bea-4115-8a7b-486ea2948e4a"). InnerVolumeSpecName "scripts-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.160375 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "597f5520-4bea-4115-8a7b-486ea2948e4a" (UID: "597f5520-4bea-4115-8a7b-486ea2948e4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.176996 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-config-data" (OuterVolumeSpecName: "config-data") pod "597f5520-4bea-4115-8a7b-486ea2948e4a" (UID: "597f5520-4bea-4115-8a7b-486ea2948e4a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.236145 5023 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-scripts-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.236185 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.236198 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597f5520-4bea-4115-8a7b-486ea2948e4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.236218 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzl8w\" (UniqueName: \"kubernetes.io/projected/597f5520-4bea-4115-8a7b-486ea2948e4a-kube-api-access-lzl8w\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.496780 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38849549-c4bc-427d-8c0c-53e5d7afd2fa" path="/var/lib/kubelet/pods/38849549-c4bc-427d-8c0c-53e5d7afd2fa/volumes" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.497959 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f258e3-c74f-476a-a368-7af467976e2c" path="/var/lib/kubelet/pods/52f258e3-c74f-476a-a368-7af467976e2c/volumes" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.775377 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" event={"ID":"597f5520-4bea-4115-8a7b-486ea2948e4a","Type":"ContainerDied","Data":"40a9f6ab5d4cba7962987447bf4b8dee19f56a1d429bf384ee765668a2b26daa"} Feb 19 08:29:05 crc 
kubenswrapper[5023]: I0219 08:29:05.775608 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a9f6ab5d4cba7962987447bf4b8dee19f56a1d429bf384ee765668a2b26daa" Feb 19 08:29:05 crc kubenswrapper[5023]: I0219 08:29:05.775471 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.490134 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.501946 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-tdhw9"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.508112 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.514725 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29524829-p5rwg"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.555348 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-b4xf8"] Feb 19 08:29:08 crc kubenswrapper[5023]: E0219 08:29:08.555726 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597f5520-4bea-4115-8a7b-486ea2948e4a" containerName="watcher-db-manage" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.555742 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="597f5520-4bea-4115-8a7b-486ea2948e4a" containerName="watcher-db-manage" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.555897 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="597f5520-4bea-4115-8a7b-486ea2948e4a" containerName="watcher-db-manage" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.556442 5023 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.569713 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-b4xf8"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.595272 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/2d8dc94f-35b1-4538-8375-74a3087409a0-kube-api-access-zj6lg\") pod \"watchertest-account-delete-b4xf8\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.595439 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8dc94f-35b1-4538-8375-74a3087409a0-operator-scripts\") pod \"watchertest-account-delete-b4xf8\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.601239 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.601460 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e2376ce7-7c47-4c38-b062-c076da4fdbbc" containerName="watcher-applier" containerID="cri-o://67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822" gracePeriod=30 Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.653798 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.654062 5023 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="ba1f4b72-bd83-407b-95ff-0c5f081433dc" containerName="watcher-decision-engine" containerID="cri-o://337ffc8dd95b077005dc7dac668356effa8273025493f72a8072751bcbd5e3dd" gracePeriod=30 Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.697433 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8dc94f-35b1-4538-8375-74a3087409a0-operator-scripts\") pod \"watchertest-account-delete-b4xf8\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.697508 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/2d8dc94f-35b1-4538-8375-74a3087409a0-kube-api-access-zj6lg\") pod \"watchertest-account-delete-b4xf8\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.698482 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8dc94f-35b1-4538-8375-74a3087409a0-operator-scripts\") pod \"watchertest-account-delete-b4xf8\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.718709 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.719093 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-kuttl-api-log" 
containerID="cri-o://545226bffda62aaa7d9f7cd601ea0b08dd60eb63adf9cd20c30b8f21ed6d9c4d" gracePeriod=30 Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.719178 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-api" containerID="cri-o://99cb880491fd8ab400dee606ef06a7445310df4e57ba84a6e1390ab583075e69" gracePeriod=30 Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.731135 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/2d8dc94f-35b1-4538-8375-74a3087409a0-kube-api-access-zj6lg\") pod \"watchertest-account-delete-b4xf8\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.738465 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.738748 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-kuttl-api-log" containerID="cri-o://e60d686ae53b3fde70d37b1a8a267fb317392ee3f8052a84aa2908d70f7ed95f" gracePeriod=30 Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.738787 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-api" containerID="cri-o://8c59460cc1ed50e42e5eb0a89ae7ec6b2ecc6b5adb1874a1bf01e05051a1ed47" gracePeriod=30 Feb 19 08:29:08 crc kubenswrapper[5023]: I0219 08:29:08.913217 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.472087 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-b4xf8"] Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.487472 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4debfe-d881-4d61-bf04-553f1e641ad7" path="/var/lib/kubelet/pods/4f4debfe-d881-4d61-bf04-553f1e641ad7/volumes" Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.488185 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597f5520-4bea-4115-8a7b-486ea2948e4a" path="/var/lib/kubelet/pods/597f5520-4bea-4115-8a7b-486ea2948e4a/volumes" Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.822579 5023 generic.go:334] "Generic (PLEG): container finished" podID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerID="99cb880491fd8ab400dee606ef06a7445310df4e57ba84a6e1390ab583075e69" exitCode=0 Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.822972 5023 generic.go:334] "Generic (PLEG): container finished" podID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerID="545226bffda62aaa7d9f7cd601ea0b08dd60eb63adf9cd20c30b8f21ed6d9c4d" exitCode=143 Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.822654 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f","Type":"ContainerDied","Data":"99cb880491fd8ab400dee606ef06a7445310df4e57ba84a6e1390ab583075e69"} Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.823081 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f","Type":"ContainerDied","Data":"545226bffda62aaa7d9f7cd601ea0b08dd60eb63adf9cd20c30b8f21ed6d9c4d"} Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.827889 5023 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" event={"ID":"2d8dc94f-35b1-4538-8375-74a3087409a0","Type":"ContainerStarted","Data":"6e2851ee52a37ae4aba5850a75c342ff8d2df2f5e120b0689786d93d20788285"} Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.827934 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" event={"ID":"2d8dc94f-35b1-4538-8375-74a3087409a0","Type":"ContainerStarted","Data":"c89d8d53329b3d6f2650c17539238ac636bfcca469d0bf0ad577d241a50ec77c"} Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.832490 5023 generic.go:334] "Generic (PLEG): container finished" podID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerID="8c59460cc1ed50e42e5eb0a89ae7ec6b2ecc6b5adb1874a1bf01e05051a1ed47" exitCode=0 Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.832519 5023 generic.go:334] "Generic (PLEG): container finished" podID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerID="e60d686ae53b3fde70d37b1a8a267fb317392ee3f8052a84aa2908d70f7ed95f" exitCode=143 Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.832540 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"6aa4ffe7-15b8-4d56-a9ad-269363c8a496","Type":"ContainerDied","Data":"8c59460cc1ed50e42e5eb0a89ae7ec6b2ecc6b5adb1874a1bf01e05051a1ed47"} Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.832576 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"6aa4ffe7-15b8-4d56-a9ad-269363c8a496","Type":"ContainerDied","Data":"e60d686ae53b3fde70d37b1a8a267fb317392ee3f8052a84aa2908d70f7ed95f"} Feb 19 08:29:09 crc kubenswrapper[5023]: I0219 08:29:09.858689 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" podStartSLOduration=1.858671832 podStartE2EDuration="1.858671832s" 
podCreationTimestamp="2026-02-19 08:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:29:09.851411299 +0000 UTC m=+1707.508530247" watchObservedRunningTime="2026-02-19 08:29:09.858671832 +0000 UTC m=+1707.515790780" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.195599 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.202579 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.254712 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-logs\") pod \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.254764 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-combined-ca-bundle\") pod \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.254817 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qrnb\" (UniqueName: \"kubernetes.io/projected/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-kube-api-access-8qrnb\") pod \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.254840 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xmb8\" (UniqueName: 
\"kubernetes.io/projected/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-kube-api-access-6xmb8\") pod \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.254894 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-config-data\") pod \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.254932 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-cert-memcached-mtls\") pod \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.254962 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-logs\") pod \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.255023 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-custom-prometheus-ca\") pod \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.255045 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-combined-ca-bundle\") pod \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\" (UID: \"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 
08:29:10.255081 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-cert-memcached-mtls\") pod \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.255121 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-custom-prometheus-ca\") pod \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.255153 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-config-data\") pod \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\" (UID: \"6aa4ffe7-15b8-4d56-a9ad-269363c8a496\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.255218 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-logs" (OuterVolumeSpecName: "logs") pod "6aa4ffe7-15b8-4d56-a9ad-269363c8a496" (UID: "6aa4ffe7-15b8-4d56-a9ad-269363c8a496"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.255470 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.257001 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-logs" (OuterVolumeSpecName: "logs") pod "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" (UID: "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.285918 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-kube-api-access-6xmb8" (OuterVolumeSpecName: "kube-api-access-6xmb8") pod "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" (UID: "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f"). InnerVolumeSpecName "kube-api-access-6xmb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.294492 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-kube-api-access-8qrnb" (OuterVolumeSpecName: "kube-api-access-8qrnb") pod "6aa4ffe7-15b8-4d56-a9ad-269363c8a496" (UID: "6aa4ffe7-15b8-4d56-a9ad-269363c8a496"). InnerVolumeSpecName "kube-api-access-8qrnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.298998 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aa4ffe7-15b8-4d56-a9ad-269363c8a496" (UID: "6aa4ffe7-15b8-4d56-a9ad-269363c8a496"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.299955 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" (UID: "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.307484 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" (UID: "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.321726 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "6aa4ffe7-15b8-4d56-a9ad-269363c8a496" (UID: "6aa4ffe7-15b8-4d56-a9ad-269363c8a496"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.335679 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-config-data" (OuterVolumeSpecName: "config-data") pod "6aa4ffe7-15b8-4d56-a9ad-269363c8a496" (UID: "6aa4ffe7-15b8-4d56-a9ad-269363c8a496"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.336083 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-config-data" (OuterVolumeSpecName: "config-data") pod "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" (UID: "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357527 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357575 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qrnb\" (UniqueName: \"kubernetes.io/projected/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-kube-api-access-8qrnb\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357588 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xmb8\" (UniqueName: \"kubernetes.io/projected/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-kube-api-access-6xmb8\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357598 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357606 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357639 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357649 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357657 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.357665 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.358045 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6aa4ffe7-15b8-4d56-a9ad-269363c8a496" (UID: "6aa4ffe7-15b8-4d56-a9ad-269363c8a496"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.384806 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" (UID: "7bc68ef1-4ca4-44ed-80c1-3b657104fc2f"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.429461 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.459076 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcxtc\" (UniqueName: \"kubernetes.io/projected/e2376ce7-7c47-4c38-b062-c076da4fdbbc-kube-api-access-fcxtc\") pod \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.459377 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2376ce7-7c47-4c38-b062-c076da4fdbbc-logs\") pod \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.459509 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-config-data\") pod \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.459594 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-cert-memcached-mtls\") pod \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.459756 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-combined-ca-bundle\") pod \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\" (UID: \"e2376ce7-7c47-4c38-b062-c076da4fdbbc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.460128 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" 
(UniqueName: \"kubernetes.io/secret/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.460204 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6aa4ffe7-15b8-4d56-a9ad-269363c8a496-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.461921 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2376ce7-7c47-4c38-b062-c076da4fdbbc-logs" (OuterVolumeSpecName: "logs") pod "e2376ce7-7c47-4c38-b062-c076da4fdbbc" (UID: "e2376ce7-7c47-4c38-b062-c076da4fdbbc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.465111 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2376ce7-7c47-4c38-b062-c076da4fdbbc-kube-api-access-fcxtc" (OuterVolumeSpecName: "kube-api-access-fcxtc") pod "e2376ce7-7c47-4c38-b062-c076da4fdbbc" (UID: "e2376ce7-7c47-4c38-b062-c076da4fdbbc"). InnerVolumeSpecName "kube-api-access-fcxtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.476833 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:29:10 crc kubenswrapper[5023]: E0219 08:29:10.477100 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.480017 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2376ce7-7c47-4c38-b062-c076da4fdbbc" (UID: "e2376ce7-7c47-4c38-b062-c076da4fdbbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.518197 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-config-data" (OuterVolumeSpecName: "config-data") pod "e2376ce7-7c47-4c38-b062-c076da4fdbbc" (UID: "e2376ce7-7c47-4c38-b062-c076da4fdbbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.522735 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "e2376ce7-7c47-4c38-b062-c076da4fdbbc" (UID: "e2376ce7-7c47-4c38-b062-c076da4fdbbc"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.564717 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2376ce7-7c47-4c38-b062-c076da4fdbbc-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.564756 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.564766 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.564776 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2376ce7-7c47-4c38-b062-c076da4fdbbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.564784 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcxtc\" (UniqueName: \"kubernetes.io/projected/e2376ce7-7c47-4c38-b062-c076da4fdbbc-kube-api-access-fcxtc\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.841741 5023 generic.go:334] "Generic (PLEG): container finished" podID="e2376ce7-7c47-4c38-b062-c076da4fdbbc" containerID="67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822" exitCode=0 Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.841820 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"e2376ce7-7c47-4c38-b062-c076da4fdbbc","Type":"ContainerDied","Data":"67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822"} Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.841855 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e2376ce7-7c47-4c38-b062-c076da4fdbbc","Type":"ContainerDied","Data":"94d246d0b91c548b275d70506250d3c3e04a770032e73de50948095624fc6bd7"} Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.841878 5023 scope.go:117] "RemoveContainer" containerID="67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.842232 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.847968 5023 generic.go:334] "Generic (PLEG): container finished" podID="ba1f4b72-bd83-407b-95ff-0c5f081433dc" containerID="337ffc8dd95b077005dc7dac668356effa8273025493f72a8072751bcbd5e3dd" exitCode=0 Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.848057 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ba1f4b72-bd83-407b-95ff-0c5f081433dc","Type":"ContainerDied","Data":"337ffc8dd95b077005dc7dac668356effa8273025493f72a8072751bcbd5e3dd"} Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.848087 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"ba1f4b72-bd83-407b-95ff-0c5f081433dc","Type":"ContainerDied","Data":"a280918c28ed3eb41186fc60a48ddef0fade75d848792c07e7aba781e292442b"} Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.848097 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a280918c28ed3eb41186fc60a48ddef0fade75d848792c07e7aba781e292442b" Feb 19 08:29:10 crc 
kubenswrapper[5023]: I0219 08:29:10.850803 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7bc68ef1-4ca4-44ed-80c1-3b657104fc2f","Type":"ContainerDied","Data":"90d9c9456eec8f27869f55a76bc65bd7942b123a04bb43b9b2a705e55f5298c7"} Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.850849 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.853885 5023 generic.go:334] "Generic (PLEG): container finished" podID="2d8dc94f-35b1-4538-8375-74a3087409a0" containerID="6e2851ee52a37ae4aba5850a75c342ff8d2df2f5e120b0689786d93d20788285" exitCode=0 Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.854008 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" event={"ID":"2d8dc94f-35b1-4538-8375-74a3087409a0","Type":"ContainerDied","Data":"6e2851ee52a37ae4aba5850a75c342ff8d2df2f5e120b0689786d93d20788285"} Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.874449 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"6aa4ffe7-15b8-4d56-a9ad-269363c8a496","Type":"ContainerDied","Data":"eb00e3721ba9052cb3aefa68dc2c6b55066f4b559a3d37f10895248c17ea7b81"} Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.874542 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.894943 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.945545 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.953694 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.960279 5023 scope.go:117] "RemoveContainer" containerID="67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.960399 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:29:10 crc kubenswrapper[5023]: E0219 08:29:10.964071 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822\": container with ID starting with 67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822 not found: ID does not exist" containerID="67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.964119 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822"} err="failed to get container status \"67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822\": rpc error: code = NotFound desc = could not find container \"67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822\": container with ID starting with 67f48a81cd93044856741b380416491f5ce6be766e4dfb115b43cd7f10c37822 not found: ID does not exist" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.964145 5023 scope.go:117] "RemoveContainer" 
containerID="99cb880491fd8ab400dee606ef06a7445310df4e57ba84a6e1390ab583075e69" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.970385 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-custom-prometheus-ca\") pod \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.970699 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs4l8\" (UniqueName: \"kubernetes.io/projected/ba1f4b72-bd83-407b-95ff-0c5f081433dc-kube-api-access-xs4l8\") pod \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.970812 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-combined-ca-bundle\") pod \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.970951 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-cert-memcached-mtls\") pod \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.971112 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1f4b72-bd83-407b-95ff-0c5f081433dc-logs\") pod \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.971203 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-config-data\") pod \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\" (UID: \"ba1f4b72-bd83-407b-95ff-0c5f081433dc\") " Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.974435 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.974873 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1f4b72-bd83-407b-95ff-0c5f081433dc-logs" (OuterVolumeSpecName: "logs") pod "ba1f4b72-bd83-407b-95ff-0c5f081433dc" (UID: "ba1f4b72-bd83-407b-95ff-0c5f081433dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.985695 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:29:10 crc kubenswrapper[5023]: I0219 08:29:10.989092 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.009738 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1f4b72-bd83-407b-95ff-0c5f081433dc-kube-api-access-xs4l8" (OuterVolumeSpecName: "kube-api-access-xs4l8") pod "ba1f4b72-bd83-407b-95ff-0c5f081433dc" (UID: "ba1f4b72-bd83-407b-95ff-0c5f081433dc"). InnerVolumeSpecName "kube-api-access-xs4l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.016694 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ba1f4b72-bd83-407b-95ff-0c5f081433dc" (UID: "ba1f4b72-bd83-407b-95ff-0c5f081433dc"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.047702 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba1f4b72-bd83-407b-95ff-0c5f081433dc" (UID: "ba1f4b72-bd83-407b-95ff-0c5f081433dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.048561 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-config-data" (OuterVolumeSpecName: "config-data") pod "ba1f4b72-bd83-407b-95ff-0c5f081433dc" (UID: "ba1f4b72-bd83-407b-95ff-0c5f081433dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.066571 5023 scope.go:117] "RemoveContainer" containerID="545226bffda62aaa7d9f7cd601ea0b08dd60eb63adf9cd20c30b8f21ed6d9c4d" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.072609 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1f4b72-bd83-407b-95ff-0c5f081433dc-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.072651 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.072664 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.072675 5023 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xs4l8\" (UniqueName: \"kubernetes.io/projected/ba1f4b72-bd83-407b-95ff-0c5f081433dc-kube-api-access-xs4l8\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.072683 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.073751 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ba1f4b72-bd83-407b-95ff-0c5f081433dc" (UID: "ba1f4b72-bd83-407b-95ff-0c5f081433dc"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.091204 5023 scope.go:117] "RemoveContainer" containerID="8c59460cc1ed50e42e5eb0a89ae7ec6b2ecc6b5adb1874a1bf01e05051a1ed47" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.111118 5023 scope.go:117] "RemoveContainer" containerID="e60d686ae53b3fde70d37b1a8a267fb317392ee3f8052a84aa2908d70f7ed95f" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.173892 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ba1f4b72-bd83-407b-95ff-0c5f081433dc-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.507826 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" path="/var/lib/kubelet/pods/6aa4ffe7-15b8-4d56-a9ad-269363c8a496/volumes" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.508807 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" 
path="/var/lib/kubelet/pods/7bc68ef1-4ca4-44ed-80c1-3b657104fc2f/volumes" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.509516 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2376ce7-7c47-4c38-b062-c076da4fdbbc" path="/var/lib/kubelet/pods/e2376ce7-7c47-4c38-b062-c076da4fdbbc/volumes" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.581413 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.581714 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-central-agent" containerID="cri-o://3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a" gracePeriod=30 Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.581743 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="proxy-httpd" containerID="cri-o://7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc" gracePeriod=30 Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.581760 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="sg-core" containerID="cri-o://b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4" gracePeriod=30 Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.581789 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-notification-agent" containerID="cri-o://ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a" gracePeriod=30 Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.887672 5023 
generic.go:334] "Generic (PLEG): container finished" podID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerID="7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc" exitCode=0 Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.887709 5023 generic.go:334] "Generic (PLEG): container finished" podID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerID="b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4" exitCode=2 Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.887770 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerDied","Data":"7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc"} Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.887825 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerDied","Data":"b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4"} Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.887846 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.934192 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:29:11 crc kubenswrapper[5023]: I0219 08:29:11.946366 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.248302 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.289486 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/2d8dc94f-35b1-4538-8375-74a3087409a0-kube-api-access-zj6lg\") pod \"2d8dc94f-35b1-4538-8375-74a3087409a0\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.289667 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8dc94f-35b1-4538-8375-74a3087409a0-operator-scripts\") pod \"2d8dc94f-35b1-4538-8375-74a3087409a0\" (UID: \"2d8dc94f-35b1-4538-8375-74a3087409a0\") " Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.290299 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8dc94f-35b1-4538-8375-74a3087409a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d8dc94f-35b1-4538-8375-74a3087409a0" (UID: "2d8dc94f-35b1-4538-8375-74a3087409a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.294367 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8dc94f-35b1-4538-8375-74a3087409a0-kube-api-access-zj6lg" (OuterVolumeSpecName: "kube-api-access-zj6lg") pod "2d8dc94f-35b1-4538-8375-74a3087409a0" (UID: "2d8dc94f-35b1-4538-8375-74a3087409a0"). InnerVolumeSpecName "kube-api-access-zj6lg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.391451 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8dc94f-35b1-4538-8375-74a3087409a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.391484 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj6lg\" (UniqueName: \"kubernetes.io/projected/2d8dc94f-35b1-4538-8375-74a3087409a0-kube-api-access-zj6lg\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.902899 5023 generic.go:334] "Generic (PLEG): container finished" podID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerID="3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a" exitCode=0 Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.902965 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerDied","Data":"3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a"} Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.904013 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" event={"ID":"2d8dc94f-35b1-4538-8375-74a3087409a0","Type":"ContainerDied","Data":"c89d8d53329b3d6f2650c17539238ac636bfcca469d0bf0ad577d241a50ec77c"} Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.904044 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89d8d53329b3d6f2650c17539238ac636bfcca469d0bf0ad577d241a50ec77c" Feb 19 08:29:12 crc kubenswrapper[5023]: I0219 08:29:12.904112 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-b4xf8" Feb 19 08:29:13 crc kubenswrapper[5023]: I0219 08:29:13.486647 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1f4b72-bd83-407b-95ff-0c5f081433dc" path="/var/lib/kubelet/pods/ba1f4b72-bd83-407b-95ff-0c5f081433dc/volumes" Feb 19 08:29:13 crc kubenswrapper[5023]: I0219 08:29:13.590302 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lbwh8"] Feb 19 08:29:13 crc kubenswrapper[5023]: I0219 08:29:13.599008 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-lbwh8"] Feb 19 08:29:13 crc kubenswrapper[5023]: I0219 08:29:13.605963 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-b4xf8"] Feb 19 08:29:13 crc kubenswrapper[5023]: I0219 08:29:13.612276 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-4ldwg"] Feb 19 08:29:13 crc kubenswrapper[5023]: I0219 08:29:13.618685 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-4ldwg"] Feb 19 08:29:13 crc kubenswrapper[5023]: I0219 08:29:13.625812 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-b4xf8"] Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.817874 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.927463 5023 generic.go:334] "Generic (PLEG): container finished" podID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerID="ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a" exitCode=0 Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.927558 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerDied","Data":"ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a"} Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.927916 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04","Type":"ContainerDied","Data":"05c85e241c1afcb08310f2bbf9ac82ef61d7fbc60c8684a3c3e48a3c9632674e"} Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.927636 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.927974 5023 scope.go:117] "RemoveContainer" containerID="7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.938827 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-ceilometer-tls-certs\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.939152 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-scripts\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.939180 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7hg\" (UniqueName: \"kubernetes.io/projected/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-kube-api-access-7t7hg\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.939215 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-combined-ca-bundle\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.939243 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-sg-core-conf-yaml\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: 
\"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.939297 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-config-data\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.939341 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-run-httpd\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.939419 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-log-httpd\") pod \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\" (UID: \"c0a3bd71-e222-4fde-bce8-e0b0ddc33e04\") " Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.940511 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.941252 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.945016 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-scripts" (OuterVolumeSpecName: "scripts") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.949904 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-kube-api-access-7t7hg" (OuterVolumeSpecName: "kube-api-access-7t7hg") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). InnerVolumeSpecName "kube-api-access-7t7hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.960494 5023 scope.go:117] "RemoveContainer" containerID="b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4" Feb 19 08:29:14 crc kubenswrapper[5023]: I0219 08:29:14.965242 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.011746 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.041225 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.041267 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7hg\" (UniqueName: \"kubernetes.io/projected/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-kube-api-access-7t7hg\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.041284 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.041294 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.041305 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.041315 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.064575 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-config-data" (OuterVolumeSpecName: "config-data") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.064593 5023 scope.go:117] "RemoveContainer" containerID="ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.072765 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" (UID: "c0a3bd71-e222-4fde-bce8-e0b0ddc33e04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.081505 5023 scope.go:117] "RemoveContainer" containerID="3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.097500 5023 scope.go:117] "RemoveContainer" containerID="7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.097913 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc\": container with ID starting with 7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc not found: ID does not exist" containerID="7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.097962 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc"} err="failed to get container status \"7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc\": rpc error: code = NotFound desc = could not find container 
\"7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc\": container with ID starting with 7ea8718690173e9e506621e728b0dc57171b48abfa83d3ec4612b311705f27cc not found: ID does not exist" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.097992 5023 scope.go:117] "RemoveContainer" containerID="b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.098285 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4\": container with ID starting with b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4 not found: ID does not exist" containerID="b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.098395 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4"} err="failed to get container status \"b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4\": rpc error: code = NotFound desc = could not find container \"b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4\": container with ID starting with b0ff515f1026db169d660dcbc433826cdb1d77bfb4483fadee900449c947b6b4 not found: ID does not exist" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.098481 5023 scope.go:117] "RemoveContainer" containerID="ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.098851 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a\": container with ID starting with ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a not found: ID does not exist" 
containerID="ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.098906 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a"} err="failed to get container status \"ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a\": rpc error: code = NotFound desc = could not find container \"ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a\": container with ID starting with ebaa106bd241dfd4a319b957c2b7b62f16df03c5abe901f97441af6abae9358a not found: ID does not exist" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.098937 5023 scope.go:117] "RemoveContainer" containerID="3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.099214 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a\": container with ID starting with 3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a not found: ID does not exist" containerID="3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.099246 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a"} err="failed to get container status \"3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a\": rpc error: code = NotFound desc = could not find container \"3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a\": container with ID starting with 3a289c95ee6697e5022bec2541ff24b7d4e20143de010977d4687b702053646a not found: ID does not exist" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.142607 5023 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.142676 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.154914 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-mwlxs"] Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.155594 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-notification-agent" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.155612 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-notification-agent" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.155648 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8dc94f-35b1-4538-8375-74a3087409a0" containerName="mariadb-account-delete" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.155655 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8dc94f-35b1-4538-8375-74a3087409a0" containerName="mariadb-account-delete" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.155666 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="proxy-httpd" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.155673 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="proxy-httpd" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.155691 5023 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-api" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.155698 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-api" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.155724 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2376ce7-7c47-4c38-b062-c076da4fdbbc" containerName="watcher-applier" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.155732 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2376ce7-7c47-4c38-b062-c076da4fdbbc" containerName="watcher-applier" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.155749 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-kuttl-api-log" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.155757 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-kuttl-api-log" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.166402 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-central-agent" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.166673 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-central-agent" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.166749 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1f4b72-bd83-407b-95ff-0c5f081433dc" containerName="watcher-decision-engine" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.166818 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1f4b72-bd83-407b-95ff-0c5f081433dc" containerName="watcher-decision-engine" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.166870 5023 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="sg-core" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.166935 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="sg-core" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.167004 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-kuttl-api-log" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.167063 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-kuttl-api-log" Feb 19 08:29:15 crc kubenswrapper[5023]: E0219 08:29:15.167137 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-api" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.167205 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-api" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.167746 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1f4b72-bd83-407b-95ff-0c5f081433dc" containerName="watcher-decision-engine" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.167850 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8dc94f-35b1-4538-8375-74a3087409a0" containerName="mariadb-account-delete" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.167952 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="sg-core" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168050 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-api" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168118 5023 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6aa4ffe7-15b8-4d56-a9ad-269363c8a496" containerName="watcher-kuttl-api-log" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168180 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-api" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168248 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc68ef1-4ca4-44ed-80c1-3b657104fc2f" containerName="watcher-kuttl-api-log" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168308 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-central-agent" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168402 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="ceilometer-notification-agent" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168885 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" containerName="proxy-httpd" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.168959 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2376ce7-7c47-4c38-b062-c076da4fdbbc" containerName="watcher-applier" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.169874 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.228018 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7"] Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.231467 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.273641 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.276893 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4fxd\" (UniqueName: \"kubernetes.io/projected/82b56214-628d-4025-b897-877f5cc251a0-kube-api-access-d4fxd\") pod \"watcher-db-create-mwlxs\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.277017 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b56214-628d-4025-b897-877f5cc251a0-operator-scripts\") pod \"watcher-db-create-mwlxs\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.278698 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mwlxs"] Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.302718 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7"] Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.356692 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.378008 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd40cf0-df29-4446-89f5-06fc184f01d0-operator-scripts\") pod \"watcher-2a51-account-create-update-bbdz7\" (UID: 
\"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.378350 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b56214-628d-4025-b897-877f5cc251a0-operator-scripts\") pod \"watcher-db-create-mwlxs\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.378503 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqj7w\" (UniqueName: \"kubernetes.io/projected/fcd40cf0-df29-4446-89f5-06fc184f01d0-kube-api-access-fqj7w\") pod \"watcher-2a51-account-create-update-bbdz7\" (UID: \"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.378666 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4fxd\" (UniqueName: \"kubernetes.io/projected/82b56214-628d-4025-b897-877f5cc251a0-kube-api-access-d4fxd\") pod \"watcher-db-create-mwlxs\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.380094 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.380841 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b56214-628d-4025-b897-877f5cc251a0-operator-scripts\") pod \"watcher-db-create-mwlxs\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.405673 5023 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.407709 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.408367 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4fxd\" (UniqueName: \"kubernetes.io/projected/82b56214-628d-4025-b897-877f5cc251a0-kube-api-access-d4fxd\") pod \"watcher-db-create-mwlxs\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.426671 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.427381 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.427509 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.434260 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.479773 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.479839 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.479862 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.479888 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd40cf0-df29-4446-89f5-06fc184f01d0-operator-scripts\") pod \"watcher-2a51-account-create-update-bbdz7\" (UID: \"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.479916 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2fkk\" (UniqueName: \"kubernetes.io/projected/d98a718d-963c-477d-875a-d0120df577a9-kube-api-access-n2fkk\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.479939 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-config-data\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.479978 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqj7w\" (UniqueName: 
\"kubernetes.io/projected/fcd40cf0-df29-4446-89f5-06fc184f01d0-kube-api-access-fqj7w\") pod \"watcher-2a51-account-create-update-bbdz7\" (UID: \"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.480007 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-log-httpd\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.480032 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-scripts\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.480053 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-run-httpd\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.480782 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd40cf0-df29-4446-89f5-06fc184f01d0-operator-scripts\") pod \"watcher-2a51-account-create-update-bbdz7\" (UID: \"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.498598 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8dc94f-35b1-4538-8375-74a3087409a0" 
path="/var/lib/kubelet/pods/2d8dc94f-35b1-4538-8375-74a3087409a0/volumes" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.502387 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a3bd71-e222-4fde-bce8-e0b0ddc33e04" path="/var/lib/kubelet/pods/c0a3bd71-e222-4fde-bce8-e0b0ddc33e04/volumes" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.503349 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c967911b-7232-46f9-b9dc-98571984b719" path="/var/lib/kubelet/pods/c967911b-7232-46f9-b9dc-98571984b719/volumes" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.503488 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqj7w\" (UniqueName: \"kubernetes.io/projected/fcd40cf0-df29-4446-89f5-06fc184f01d0-kube-api-access-fqj7w\") pod \"watcher-2a51-account-create-update-bbdz7\" (UID: \"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.504003 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb71e49-7f8d-4cf5-afd9-95a14a36325e" path="/var/lib/kubelet/pods/deb71e49-7f8d-4cf5-afd9-95a14a36325e/volumes" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.508924 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.591668 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2fkk\" (UniqueName: \"kubernetes.io/projected/d98a718d-963c-477d-875a-d0120df577a9-kube-api-access-n2fkk\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.591728 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-config-data\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.591827 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-log-httpd\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.591855 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-scripts\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.591884 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-run-httpd\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.591953 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.591999 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.592024 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.594822 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.595153 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.595305 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-run-httpd\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.596570 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-scripts\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.597078 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-log-httpd\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.599600 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-config-data\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.600763 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.618972 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2fkk\" (UniqueName: \"kubernetes.io/projected/d98a718d-963c-477d-875a-d0120df577a9-kube-api-access-n2fkk\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.623167 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:15 crc kubenswrapper[5023]: I0219 08:29:15.858288 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.076697 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mwlxs"] Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.149145 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7"] Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.287566 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:16 crc kubenswrapper[5023]: W0219 08:29:16.290076 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd98a718d_963c_477d_875a_d0120df577a9.slice/crio-3e3f44a1f8e89b27ee504d08163af6af276f1c90aa2f1b3354dc2ff5d09a3825 WatchSource:0}: Error finding container 3e3f44a1f8e89b27ee504d08163af6af276f1c90aa2f1b3354dc2ff5d09a3825: Status 404 returned error can't find the container with id 3e3f44a1f8e89b27ee504d08163af6af276f1c90aa2f1b3354dc2ff5d09a3825 Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.947991 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerStarted","Data":"3e3f44a1f8e89b27ee504d08163af6af276f1c90aa2f1b3354dc2ff5d09a3825"} Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.950179 5023 generic.go:334] "Generic (PLEG): container finished" podID="82b56214-628d-4025-b897-877f5cc251a0" containerID="c5fff8d37df38abf87c87fc576230474be30d7b3e9a19fa7872e2e8e14d5f403" exitCode=0 Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.950248 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-mwlxs" 
event={"ID":"82b56214-628d-4025-b897-877f5cc251a0","Type":"ContainerDied","Data":"c5fff8d37df38abf87c87fc576230474be30d7b3e9a19fa7872e2e8e14d5f403"} Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.950493 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-mwlxs" event={"ID":"82b56214-628d-4025-b897-877f5cc251a0","Type":"ContainerStarted","Data":"e762d3ddb0f34bd39bd49690669376df91ce1cfe824987063a365ebe00896ac6"} Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.951851 5023 generic.go:334] "Generic (PLEG): container finished" podID="fcd40cf0-df29-4446-89f5-06fc184f01d0" containerID="b6f5fd09fead194263061f8590deedfaf550754478f2f961fb628d72f4c78862" exitCode=0 Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.951901 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" event={"ID":"fcd40cf0-df29-4446-89f5-06fc184f01d0","Type":"ContainerDied","Data":"b6f5fd09fead194263061f8590deedfaf550754478f2f961fb628d72f4c78862"} Feb 19 08:29:16 crc kubenswrapper[5023]: I0219 08:29:16.951929 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" event={"ID":"fcd40cf0-df29-4446-89f5-06fc184f01d0","Type":"ContainerStarted","Data":"63d6879f95f6228845517af7bb1bbf0dcabdfcc91cb5221c7ca2debf9eaeab07"} Feb 19 08:29:17 crc kubenswrapper[5023]: I0219 08:29:17.970393 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerStarted","Data":"c131f3a8272a9bc9e52900a41a40e3cecb6cd3d255a87efa6be6f848d785012f"} Feb 19 08:29:17 crc kubenswrapper[5023]: I0219 08:29:17.970961 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerStarted","Data":"f3b8ec632030c716431bb872336dd561e8bf680f25dfe5ab90e1273041c29a3c"} Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.377510 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.436525 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.438562 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b56214-628d-4025-b897-877f5cc251a0-operator-scripts\") pod \"82b56214-628d-4025-b897-877f5cc251a0\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.438729 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4fxd\" (UniqueName: \"kubernetes.io/projected/82b56214-628d-4025-b897-877f5cc251a0-kube-api-access-d4fxd\") pod \"82b56214-628d-4025-b897-877f5cc251a0\" (UID: \"82b56214-628d-4025-b897-877f5cc251a0\") " Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.439242 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b56214-628d-4025-b897-877f5cc251a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82b56214-628d-4025-b897-877f5cc251a0" (UID: "82b56214-628d-4025-b897-877f5cc251a0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.443926 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b56214-628d-4025-b897-877f5cc251a0-kube-api-access-d4fxd" (OuterVolumeSpecName: "kube-api-access-d4fxd") pod "82b56214-628d-4025-b897-877f5cc251a0" (UID: "82b56214-628d-4025-b897-877f5cc251a0"). InnerVolumeSpecName "kube-api-access-d4fxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.540247 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqj7w\" (UniqueName: \"kubernetes.io/projected/fcd40cf0-df29-4446-89f5-06fc184f01d0-kube-api-access-fqj7w\") pod \"fcd40cf0-df29-4446-89f5-06fc184f01d0\" (UID: \"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.540448 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd40cf0-df29-4446-89f5-06fc184f01d0-operator-scripts\") pod \"fcd40cf0-df29-4446-89f5-06fc184f01d0\" (UID: \"fcd40cf0-df29-4446-89f5-06fc184f01d0\") " Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.540876 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b56214-628d-4025-b897-877f5cc251a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.540898 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4fxd\" (UniqueName: \"kubernetes.io/projected/82b56214-628d-4025-b897-877f5cc251a0-kube-api-access-d4fxd\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.544845 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fcd40cf0-df29-4446-89f5-06fc184f01d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcd40cf0-df29-4446-89f5-06fc184f01d0" (UID: "fcd40cf0-df29-4446-89f5-06fc184f01d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.577457 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd40cf0-df29-4446-89f5-06fc184f01d0-kube-api-access-fqj7w" (OuterVolumeSpecName: "kube-api-access-fqj7w") pod "fcd40cf0-df29-4446-89f5-06fc184f01d0" (UID: "fcd40cf0-df29-4446-89f5-06fc184f01d0"). InnerVolumeSpecName "kube-api-access-fqj7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.642894 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd40cf0-df29-4446-89f5-06fc184f01d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.642947 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqj7w\" (UniqueName: \"kubernetes.io/projected/fcd40cf0-df29-4446-89f5-06fc184f01d0-kube-api-access-fqj7w\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.979781 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-mwlxs" event={"ID":"82b56214-628d-4025-b897-877f5cc251a0","Type":"ContainerDied","Data":"e762d3ddb0f34bd39bd49690669376df91ce1cfe824987063a365ebe00896ac6"} Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.979851 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e762d3ddb0f34bd39bd49690669376df91ce1cfe824987063a365ebe00896ac6" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.979796 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-mwlxs" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.981711 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.981715 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7" event={"ID":"fcd40cf0-df29-4446-89f5-06fc184f01d0","Type":"ContainerDied","Data":"63d6879f95f6228845517af7bb1bbf0dcabdfcc91cb5221c7ca2debf9eaeab07"} Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.981763 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d6879f95f6228845517af7bb1bbf0dcabdfcc91cb5221c7ca2debf9eaeab07" Feb 19 08:29:18 crc kubenswrapper[5023]: I0219 08:29:18.983943 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerStarted","Data":"57d145e0a4326404471bae9169f2f1e8fe27640b86bdda933667451f790daf91"} Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.496454 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb"] Feb 19 08:29:20 crc kubenswrapper[5023]: E0219 08:29:20.497556 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd40cf0-df29-4446-89f5-06fc184f01d0" containerName="mariadb-account-create-update" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.497572 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd40cf0-df29-4446-89f5-06fc184f01d0" containerName="mariadb-account-create-update" Feb 19 08:29:20 crc kubenswrapper[5023]: E0219 08:29:20.497594 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b56214-628d-4025-b897-877f5cc251a0" containerName="mariadb-database-create" Feb 19 08:29:20 crc 
kubenswrapper[5023]: I0219 08:29:20.497600 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b56214-628d-4025-b897-877f5cc251a0" containerName="mariadb-database-create" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.497843 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd40cf0-df29-4446-89f5-06fc184f01d0" containerName="mariadb-account-create-update" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.497856 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b56214-628d-4025-b897-877f5cc251a0" containerName="mariadb-database-create" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.498414 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.501279 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.501714 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-g7ghg" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.507899 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb"] Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.569958 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrgm5\" (UniqueName: \"kubernetes.io/projected/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-kube-api-access-rrgm5\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.570034 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-db-sync-config-data\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.570099 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-config-data\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.570144 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.671627 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrgm5\" (UniqueName: \"kubernetes.io/projected/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-kube-api-access-rrgm5\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.672012 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-db-sync-config-data\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.672154 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-config-data\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.672274 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.685411 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.685548 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-config-data\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.690114 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-db-sync-config-data\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.690714 5023 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rrgm5\" (UniqueName: \"kubernetes.io/projected/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-kube-api-access-rrgm5\") pod \"watcher-kuttl-db-sync-qqnxb\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:20 crc kubenswrapper[5023]: I0219 08:29:20.817485 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:21 crc kubenswrapper[5023]: I0219 08:29:21.007927 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerStarted","Data":"ee0daf028a717d8e42c4e08ac95f6f11cefa1094c121ed8642cdfcd223bf193e"} Feb 19 08:29:21 crc kubenswrapper[5023]: I0219 08:29:21.019047 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:21 crc kubenswrapper[5023]: I0219 08:29:21.050555 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.38210965 podStartE2EDuration="6.050529653s" podCreationTimestamp="2026-02-19 08:29:15 +0000 UTC" firstStartedPulling="2026-02-19 08:29:16.293480886 +0000 UTC m=+1713.950599834" lastFinishedPulling="2026-02-19 08:29:19.961900879 +0000 UTC m=+1717.619019837" observedRunningTime="2026-02-19 08:29:21.045192092 +0000 UTC m=+1718.702311040" watchObservedRunningTime="2026-02-19 08:29:21.050529653 +0000 UTC m=+1718.707648601" Feb 19 08:29:21 crc kubenswrapper[5023]: I0219 08:29:21.314663 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb"] Feb 19 08:29:22 crc kubenswrapper[5023]: I0219 08:29:22.019107 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" 
event={"ID":"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe","Type":"ContainerStarted","Data":"f6bd80e637fed7d37712e1e1418c53837c3193dcf87b694fdfa3ef2f1292cb5b"} Feb 19 08:29:22 crc kubenswrapper[5023]: I0219 08:29:22.019385 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" event={"ID":"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe","Type":"ContainerStarted","Data":"6024fdb3711bc5c8f7a79f1ca3367c1bb735332a669e79b25d4bd300c073513b"} Feb 19 08:29:22 crc kubenswrapper[5023]: I0219 08:29:22.046230 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" podStartSLOduration=2.046211001 podStartE2EDuration="2.046211001s" podCreationTimestamp="2026-02-19 08:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:29:22.042704478 +0000 UTC m=+1719.699823426" watchObservedRunningTime="2026-02-19 08:29:22.046211001 +0000 UTC m=+1719.703329949" Feb 19 08:29:24 crc kubenswrapper[5023]: I0219 08:29:24.043317 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-98gh5"] Feb 19 08:29:24 crc kubenswrapper[5023]: I0219 08:29:24.054721 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-98gh5"] Feb 19 08:29:25 crc kubenswrapper[5023]: I0219 08:29:25.044129 5023 generic.go:334] "Generic (PLEG): container finished" podID="7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" containerID="f6bd80e637fed7d37712e1e1418c53837c3193dcf87b694fdfa3ef2f1292cb5b" exitCode=0 Feb 19 08:29:25 crc kubenswrapper[5023]: I0219 08:29:25.044180 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" event={"ID":"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe","Type":"ContainerDied","Data":"f6bd80e637fed7d37712e1e1418c53837c3193dcf87b694fdfa3ef2f1292cb5b"} Feb 19 08:29:25 crc 
kubenswrapper[5023]: I0219 08:29:25.477476 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:29:25 crc kubenswrapper[5023]: E0219 08:29:25.478018 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:29:25 crc kubenswrapper[5023]: I0219 08:29:25.487699 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9007a92-1ba7-475f-a227-a36537264ead" path="/var/lib/kubelet/pods/c9007a92-1ba7-475f-a227-a36537264ead/volumes" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.386722 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.474687 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-combined-ca-bundle\") pod \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.474761 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-db-sync-config-data\") pod \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.474857 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrgm5\" (UniqueName: \"kubernetes.io/projected/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-kube-api-access-rrgm5\") pod \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.475004 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-config-data\") pod \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\" (UID: \"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe\") " Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.479780 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" (UID: "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.480307 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-kube-api-access-rrgm5" (OuterVolumeSpecName: "kube-api-access-rrgm5") pod "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" (UID: "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe"). InnerVolumeSpecName "kube-api-access-rrgm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.501926 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" (UID: "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.513439 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-config-data" (OuterVolumeSpecName: "config-data") pod "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" (UID: "7f80b860-c7ce-4a16-a516-3d3ec01cc8fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.576857 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.576881 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.576890 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:26 crc kubenswrapper[5023]: I0219 08:29:26.576899 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrgm5\" (UniqueName: \"kubernetes.io/projected/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe-kube-api-access-rrgm5\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.079076 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" event={"ID":"7f80b860-c7ce-4a16-a516-3d3ec01cc8fe","Type":"ContainerDied","Data":"6024fdb3711bc5c8f7a79f1ca3367c1bb735332a669e79b25d4bd300c073513b"} Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.079125 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6024fdb3711bc5c8f7a79f1ca3367c1bb735332a669e79b25d4bd300c073513b" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.079140 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.365471 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:29:27 crc kubenswrapper[5023]: E0219 08:29:27.365931 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" containerName="watcher-kuttl-db-sync" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.365951 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" containerName="watcher-kuttl-db-sync" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.366130 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" containerName="watcher-kuttl-db-sync" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.367262 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.370291 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-g7ghg" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.370850 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.377215 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.378571 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: W0219 08:29:27.380822 5023 reflector.go:561] object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data": failed to list *v1.Secret: secrets "watcher-kuttl-decision-engine-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "watcher-kuttl-default": no relationship found between node 'crc' and this object Feb 19 08:29:27 crc kubenswrapper[5023]: E0219 08:29:27.380871 5023 reflector.go:158] "Unhandled Error" err="object-\"watcher-kuttl-default\"/\"watcher-kuttl-decision-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"watcher-kuttl-decision-engine-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"watcher-kuttl-default\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.387938 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391559 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391629 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391664 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391710 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391738 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391769 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391889 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f46f\" (UniqueName: 
\"kubernetes.io/projected/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-kube-api-access-7f46f\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.391970 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.392081 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.392125 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e943a47-fba0-42a0-9ef7-f9f677a48428-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.392219 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.392285 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-chhbh\" (UniqueName: \"kubernetes.io/projected/3e943a47-fba0-42a0-9ef7-f9f677a48428-kube-api-access-chhbh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.402530 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.476105 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.479292 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.486272 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493741 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493791 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chhbh\" (UniqueName: \"kubernetes.io/projected/3e943a47-fba0-42a0-9ef7-f9f677a48428-kube-api-access-chhbh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493832 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493861 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pxs\" (UniqueName: \"kubernetes.io/projected/a27efcc0-c658-4771-8c7c-ab39b0318d81-kube-api-access-l9pxs\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493881 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493919 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493942 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.493972 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494011 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494024 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a27efcc0-c658-4771-8c7c-ab39b0318d81-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494048 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494091 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494115 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7f46f\" (UniqueName: \"kubernetes.io/projected/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-kube-api-access-7f46f\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494148 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494164 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494231 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.494250 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e943a47-fba0-42a0-9ef7-f9f677a48428-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.498956 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.499839 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.500033 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e943a47-fba0-42a0-9ef7-f9f677a48428-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.503508 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.515087 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.515431 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: 
I0219 08:29:27.516794 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.517119 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.525334 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f46f\" (UniqueName: \"kubernetes.io/projected/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-kube-api-access-7f46f\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.528362 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.528780 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chhbh\" (UniqueName: \"kubernetes.io/projected/3e943a47-fba0-42a0-9ef7-f9f677a48428-kube-api-access-chhbh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:27 crc kubenswrapper[5023]: 
I0219 08:29:27.541119 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.595234 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a27efcc0-c658-4771-8c7c-ab39b0318d81-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.595313 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.595400 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9pxs\" (UniqueName: \"kubernetes.io/projected/a27efcc0-c658-4771-8c7c-ab39b0318d81-kube-api-access-l9pxs\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.595431 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.595455 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.596350 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a27efcc0-c658-4771-8c7c-ab39b0318d81-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.602984 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.603011 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.603057 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.614214 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9pxs\" (UniqueName: 
\"kubernetes.io/projected/a27efcc0-c658-4771-8c7c-ab39b0318d81-kube-api-access-l9pxs\") pod \"watcher-kuttl-applier-0\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.682122 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:27 crc kubenswrapper[5023]: I0219 08:29:27.805523 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:28 crc kubenswrapper[5023]: I0219 08:29:28.116493 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:29:28 crc kubenswrapper[5023]: I0219 08:29:28.249059 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:29:28 crc kubenswrapper[5023]: E0219 08:29:28.497898 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: failed to sync secret cache: timed out waiting for the condition Feb 19 08:29:28 crc kubenswrapper[5023]: E0219 08:29:28.498063 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data podName:3e943a47-fba0-42a0-9ef7-f9f677a48428 nodeName:}" failed. No retries permitted until 2026-02-19 08:29:28.997974405 +0000 UTC m=+1726.655093353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "3e943a47-fba0-42a0-9ef7-f9f677a48428") : failed to sync secret cache: timed out waiting for the condition Feb 19 08:29:28 crc kubenswrapper[5023]: I0219 08:29:28.917332 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.020542 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.030697 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.161034 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07","Type":"ContainerStarted","Data":"8ce04b38b8007f31e3f7e088e4de0e10ebcddad780366545e0429dbbd5a5ef4f"} Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.161078 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07","Type":"ContainerStarted","Data":"22dce1044d16de79226892598b6335b90f58137c95a378a6fe28d7f54c25161d"} Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.161089 5023 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07","Type":"ContainerStarted","Data":"24f8b81f765210c6c6ab02a665f45be10119b00d2487be9485a02d876ca56b57"} Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.162071 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.179964 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a27efcc0-c658-4771-8c7c-ab39b0318d81","Type":"ContainerStarted","Data":"d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083"} Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.180010 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a27efcc0-c658-4771-8c7c-ab39b0318d81","Type":"ContainerStarted","Data":"2b644126e2f16f6c6bebdcdd81b5c8a7907fd863c953f0bbaafa6b261474feb4"} Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.235659 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.266235 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.266205153 podStartE2EDuration="2.266205153s" podCreationTimestamp="2026-02-19 08:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:29:29.197803967 +0000 UTC m=+1726.854922905" watchObservedRunningTime="2026-02-19 08:29:29.266205153 +0000 UTC m=+1726.923324101" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.696278 5023 scope.go:117] "RemoveContainer" containerID="751621a94a5f21e1ed5844eb84af541e4024be153dbed9d516e75c068c368299" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.768058 5023 scope.go:117] "RemoveContainer" containerID="ddc00f0fa8076b068a184fb0e6ca440b1e8c4b770798c7f401c593486fdcde42" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.795553 5023 scope.go:117] "RemoveContainer" containerID="335bc3aa740c637dd201b05f8900bea454014e637bdc45d736de8189af556440" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.800352 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.800323905 podStartE2EDuration="2.800323905s" podCreationTimestamp="2026-02-19 08:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:29:29.287524979 +0000 UTC m=+1726.944643947" watchObservedRunningTime="2026-02-19 08:29:29.800323905 +0000 UTC m=+1727.457442853" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.808810 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 
08:29:29.823411 5023 scope.go:117] "RemoveContainer" containerID="7814f4b666bdd2c0c72ae7f9c4a660b6b6a090f6be60f4868ab2400199525930" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.939532 5023 scope.go:117] "RemoveContainer" containerID="b6e65b0db983b478841c12b14b6e0e191c4fea7fc8070568c3456c9690ceb8b4" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.969866 5023 scope.go:117] "RemoveContainer" containerID="007a819489803ea365f7078f0b7c3a4d2df350acab0892639a03fda3a812f4b6" Feb 19 08:29:29 crc kubenswrapper[5023]: I0219 08:29:29.993583 5023 scope.go:117] "RemoveContainer" containerID="3349f3266e81227735ec32860d68cfdbcd84a69b2152c231b841d1d6fe3eadbf" Feb 19 08:29:30 crc kubenswrapper[5023]: I0219 08:29:30.023841 5023 scope.go:117] "RemoveContainer" containerID="6dc37f692f324c1c018d21f4a2e2f05ba852fbb58deb874120db64abd04fe040" Feb 19 08:29:30 crc kubenswrapper[5023]: I0219 08:29:30.053527 5023 scope.go:117] "RemoveContainer" containerID="6363cb17270ab2befd2183b68c08f6b98f5d742f6dbc4d7ebbd5b4801810e23b" Feb 19 08:29:30 crc kubenswrapper[5023]: I0219 08:29:30.088007 5023 scope.go:117] "RemoveContainer" containerID="e87eda9655712a805c36ed04260e899b3bd65e0b65856cdc07bcd00258ee76ff" Feb 19 08:29:30 crc kubenswrapper[5023]: I0219 08:29:30.113190 5023 scope.go:117] "RemoveContainer" containerID="995a0c867704f200b461eb259cd0ebafddaab6254ec87f7565f8111e0f8d3427" Feb 19 08:29:30 crc kubenswrapper[5023]: I0219 08:29:30.194311 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3e943a47-fba0-42a0-9ef7-f9f677a48428","Type":"ContainerStarted","Data":"dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649"} Feb 19 08:29:30 crc kubenswrapper[5023]: I0219 08:29:30.194361 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"3e943a47-fba0-42a0-9ef7-f9f677a48428","Type":"ContainerStarted","Data":"cd791fdc675dbb146d0a644e10b22dbc280f9de2b255424a2a31781980fd5247"} Feb 19 08:29:30 crc kubenswrapper[5023]: I0219 08:29:30.242609 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=3.242588418 podStartE2EDuration="3.242588418s" podCreationTimestamp="2026-02-19 08:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:29:30.240172814 +0000 UTC m=+1727.897291762" watchObservedRunningTime="2026-02-19 08:29:30.242588418 +0000 UTC m=+1727.899707366" Feb 19 08:29:31 crc kubenswrapper[5023]: I0219 08:29:31.220868 5023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 08:29:31 crc kubenswrapper[5023]: I0219 08:29:31.300113 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:31 crc kubenswrapper[5023]: I0219 08:29:31.669384 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:32 crc kubenswrapper[5023]: I0219 08:29:32.533477 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:32 crc kubenswrapper[5023]: I0219 08:29:32.682699 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:32 crc kubenswrapper[5023]: I0219 08:29:32.805635 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:33 crc kubenswrapper[5023]: I0219 08:29:33.714292 5023 
log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:34 crc kubenswrapper[5023]: I0219 08:29:34.900406 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:36 crc kubenswrapper[5023]: I0219 08:29:36.123728 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:37 crc kubenswrapper[5023]: I0219 08:29:37.345209 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:37 crc kubenswrapper[5023]: I0219 08:29:37.683439 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:37 crc kubenswrapper[5023]: I0219 08:29:37.690459 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:37 crc kubenswrapper[5023]: I0219 08:29:37.805868 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:37 crc kubenswrapper[5023]: I0219 08:29:37.845295 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:38 crc kubenswrapper[5023]: I0219 08:29:38.293986 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:29:38 crc kubenswrapper[5023]: I0219 08:29:38.313367 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:29:38 crc kubenswrapper[5023]: I0219 08:29:38.595114 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.235833 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.260457 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.296575 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.327404 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.347169 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.347477 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-central-agent" containerID="cri-o://f3b8ec632030c716431bb872336dd561e8bf680f25dfe5ab90e1273041c29a3c" gracePeriod=30 Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.347534 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="sg-core" containerID="cri-o://57d145e0a4326404471bae9169f2f1e8fe27640b86bdda933667451f790daf91" gracePeriod=30 Feb 19 08:29:39 crc 
kubenswrapper[5023]: I0219 08:29:39.347548 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="proxy-httpd" containerID="cri-o://ee0daf028a717d8e42c4e08ac95f6f11cefa1094c121ed8642cdfcd223bf193e" gracePeriod=30 Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.347585 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-notification-agent" containerID="cri-o://c131f3a8272a9bc9e52900a41a40e3cecb6cd3d255a87efa6be6f848d785012f" gracePeriod=30 Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.365188 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.222:3000/\": EOF" Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.478579 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:29:39 crc kubenswrapper[5023]: E0219 08:29:39.478883 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:29:39 crc kubenswrapper[5023]: I0219 08:29:39.770796 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.042147 
5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.306230 5023 generic.go:334] "Generic (PLEG): container finished" podID="d98a718d-963c-477d-875a-d0120df577a9" containerID="ee0daf028a717d8e42c4e08ac95f6f11cefa1094c121ed8642cdfcd223bf193e" exitCode=0 Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.306595 5023 generic.go:334] "Generic (PLEG): container finished" podID="d98a718d-963c-477d-875a-d0120df577a9" containerID="57d145e0a4326404471bae9169f2f1e8fe27640b86bdda933667451f790daf91" exitCode=2 Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.306612 5023 generic.go:334] "Generic (PLEG): container finished" podID="d98a718d-963c-477d-875a-d0120df577a9" containerID="f3b8ec632030c716431bb872336dd561e8bf680f25dfe5ab90e1273041c29a3c" exitCode=0 Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.306312 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerDied","Data":"ee0daf028a717d8e42c4e08ac95f6f11cefa1094c121ed8642cdfcd223bf193e"} Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.306683 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerDied","Data":"57d145e0a4326404471bae9169f2f1e8fe27640b86bdda933667451f790daf91"} Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.306698 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerDied","Data":"f3b8ec632030c716431bb872336dd561e8bf680f25dfe5ab90e1273041c29a3c"} Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.470039 5023 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/cinder-db-create-4hlnq"] Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.471094 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.496051 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-4hlnq"] Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.567067 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-7861-account-create-update-mkrjf"] Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.568424 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.570662 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-db-secret" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.576408 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-7861-account-create-update-mkrjf"] Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.620431 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5p6\" (UniqueName: \"kubernetes.io/projected/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-kube-api-access-ft5p6\") pod \"cinder-db-create-4hlnq\" (UID: \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.620683 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-operator-scripts\") pod \"cinder-db-create-4hlnq\" (UID: \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " pod="watcher-kuttl-default/cinder-db-create-4hlnq" 
Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.723914 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5p6\" (UniqueName: \"kubernetes.io/projected/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-kube-api-access-ft5p6\") pod \"cinder-db-create-4hlnq\" (UID: \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.723979 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c8e32c-e771-4c02-bb99-51acdc7a231f-operator-scripts\") pod \"cinder-7861-account-create-update-mkrjf\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.724007 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsprg\" (UniqueName: \"kubernetes.io/projected/73c8e32c-e771-4c02-bb99-51acdc7a231f-kube-api-access-zsprg\") pod \"cinder-7861-account-create-update-mkrjf\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.724261 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-operator-scripts\") pod \"cinder-db-create-4hlnq\" (UID: \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.725004 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-operator-scripts\") pod \"cinder-db-create-4hlnq\" (UID: 
\"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.753365 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5p6\" (UniqueName: \"kubernetes.io/projected/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-kube-api-access-ft5p6\") pod \"cinder-db-create-4hlnq\" (UID: \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.787940 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.826138 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c8e32c-e771-4c02-bb99-51acdc7a231f-operator-scripts\") pod \"cinder-7861-account-create-update-mkrjf\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.826196 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsprg\" (UniqueName: \"kubernetes.io/projected/73c8e32c-e771-4c02-bb99-51acdc7a231f-kube-api-access-zsprg\") pod \"cinder-7861-account-create-update-mkrjf\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.827430 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c8e32c-e771-4c02-bb99-51acdc7a231f-operator-scripts\") pod \"cinder-7861-account-create-update-mkrjf\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:40 crc 
kubenswrapper[5023]: I0219 08:29:40.861340 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsprg\" (UniqueName: \"kubernetes.io/projected/73c8e32c-e771-4c02-bb99-51acdc7a231f-kube-api-access-zsprg\") pod \"cinder-7861-account-create-update-mkrjf\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:40 crc kubenswrapper[5023]: I0219 08:29:40.886200 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:41 crc kubenswrapper[5023]: I0219 08:29:41.315390 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:41 crc kubenswrapper[5023]: I0219 08:29:41.572594 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-4hlnq"] Feb 19 08:29:41 crc kubenswrapper[5023]: W0219 08:29:41.575077 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod925dbd8c_6e2e_40fd_84d3_e61de27c7ad9.slice/crio-5e45b401978a98a5bb2ef92945cb70e5b7161b8860ddb23742c3c0e70c8d173c WatchSource:0}: Error finding container 5e45b401978a98a5bb2ef92945cb70e5b7161b8860ddb23742c3c0e70c8d173c: Status 404 returned error can't find the container with id 5e45b401978a98a5bb2ef92945cb70e5b7161b8860ddb23742c3c0e70c8d173c Feb 19 08:29:41 crc kubenswrapper[5023]: I0219 08:29:41.696899 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-7861-account-create-update-mkrjf"] Feb 19 08:29:41 crc kubenswrapper[5023]: W0219 08:29:41.704602 5023 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c8e32c_e771_4c02_bb99_51acdc7a231f.slice/crio-3c5283c9eb6b47d62466162df81ddf48b4df0e10e86e30c67ec36bec0914db76 WatchSource:0}: Error finding container 3c5283c9eb6b47d62466162df81ddf48b4df0e10e86e30c67ec36bec0914db76: Status 404 returned error can't find the container with id 3c5283c9eb6b47d62466162df81ddf48b4df0e10e86e30c67ec36bec0914db76 Feb 19 08:29:42 crc kubenswrapper[5023]: I0219 08:29:42.335888 5023 generic.go:334] "Generic (PLEG): container finished" podID="73c8e32c-e771-4c02-bb99-51acdc7a231f" containerID="45fd0549fca8bf9e41a822bbd236f05ae1e65262832ca1386fcddc759c0725eb" exitCode=0 Feb 19 08:29:42 crc kubenswrapper[5023]: I0219 08:29:42.335946 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" event={"ID":"73c8e32c-e771-4c02-bb99-51acdc7a231f","Type":"ContainerDied","Data":"45fd0549fca8bf9e41a822bbd236f05ae1e65262832ca1386fcddc759c0725eb"} Feb 19 08:29:42 crc kubenswrapper[5023]: I0219 08:29:42.336252 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" event={"ID":"73c8e32c-e771-4c02-bb99-51acdc7a231f","Type":"ContainerStarted","Data":"3c5283c9eb6b47d62466162df81ddf48b4df0e10e86e30c67ec36bec0914db76"} Feb 19 08:29:42 crc kubenswrapper[5023]: I0219 08:29:42.338023 5023 generic.go:334] "Generic (PLEG): container finished" podID="925dbd8c-6e2e-40fd-84d3-e61de27c7ad9" containerID="4057d9c6bca934b43b47bbc183ac001df4882a5da76cbbd1337715d1bc21620a" exitCode=0 Feb 19 08:29:42 crc kubenswrapper[5023]: I0219 08:29:42.338050 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-4hlnq" event={"ID":"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9","Type":"ContainerDied","Data":"4057d9c6bca934b43b47bbc183ac001df4882a5da76cbbd1337715d1bc21620a"} Feb 19 08:29:42 crc kubenswrapper[5023]: I0219 08:29:42.338066 5023 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-4hlnq" event={"ID":"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9","Type":"ContainerStarted","Data":"5e45b401978a98a5bb2ef92945cb70e5b7161b8860ddb23742c3c0e70c8d173c"} Feb 19 08:29:42 crc kubenswrapper[5023]: I0219 08:29:42.497275 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:43 crc kubenswrapper[5023]: I0219 08:29:43.697277 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:43 crc kubenswrapper[5023]: I0219 08:29:43.867535 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:43 crc kubenswrapper[5023]: I0219 08:29:43.875512 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.018678 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-operator-scripts\") pod \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\" (UID: \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.019041 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c8e32c-e771-4c02-bb99-51acdc7a231f-operator-scripts\") pod \"73c8e32c-e771-4c02-bb99-51acdc7a231f\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.019093 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "925dbd8c-6e2e-40fd-84d3-e61de27c7ad9" (UID: "925dbd8c-6e2e-40fd-84d3-e61de27c7ad9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.019309 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsprg\" (UniqueName: \"kubernetes.io/projected/73c8e32c-e771-4c02-bb99-51acdc7a231f-kube-api-access-zsprg\") pod \"73c8e32c-e771-4c02-bb99-51acdc7a231f\" (UID: \"73c8e32c-e771-4c02-bb99-51acdc7a231f\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.019433 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5p6\" (UniqueName: \"kubernetes.io/projected/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-kube-api-access-ft5p6\") pod \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\" (UID: \"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.019495 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c8e32c-e771-4c02-bb99-51acdc7a231f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73c8e32c-e771-4c02-bb99-51acdc7a231f" (UID: "73c8e32c-e771-4c02-bb99-51acdc7a231f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.019970 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73c8e32c-e771-4c02-bb99-51acdc7a231f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.020067 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.024797 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c8e32c-e771-4c02-bb99-51acdc7a231f-kube-api-access-zsprg" (OuterVolumeSpecName: "kube-api-access-zsprg") pod "73c8e32c-e771-4c02-bb99-51acdc7a231f" (UID: "73c8e32c-e771-4c02-bb99-51acdc7a231f"). InnerVolumeSpecName "kube-api-access-zsprg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.025613 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-kube-api-access-ft5p6" (OuterVolumeSpecName: "kube-api-access-ft5p6") pod "925dbd8c-6e2e-40fd-84d3-e61de27c7ad9" (UID: "925dbd8c-6e2e-40fd-84d3-e61de27c7ad9"). InnerVolumeSpecName "kube-api-access-ft5p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.121120 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsprg\" (UniqueName: \"kubernetes.io/projected/73c8e32c-e771-4c02-bb99-51acdc7a231f-kube-api-access-zsprg\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.121448 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5p6\" (UniqueName: \"kubernetes.io/projected/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9-kube-api-access-ft5p6\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: E0219 08:29:44.328951 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd98a718d_963c_477d_875a_d0120df577a9.slice/crio-conmon-c131f3a8272a9bc9e52900a41a40e3cecb6cd3d255a87efa6be6f848d785012f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd98a718d_963c_477d_875a_d0120df577a9.slice/crio-c131f3a8272a9bc9e52900a41a40e3cecb6cd3d255a87efa6be6f848d785012f.scope\": RecentStats: unable to find data in memory cache]" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.365705 5023 generic.go:334] "Generic (PLEG): container finished" podID="d98a718d-963c-477d-875a-d0120df577a9" containerID="c131f3a8272a9bc9e52900a41a40e3cecb6cd3d255a87efa6be6f848d785012f" exitCode=0 Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.365784 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerDied","Data":"c131f3a8272a9bc9e52900a41a40e3cecb6cd3d255a87efa6be6f848d785012f"} Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.367200 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/cinder-db-create-4hlnq" event={"ID":"925dbd8c-6e2e-40fd-84d3-e61de27c7ad9","Type":"ContainerDied","Data":"5e45b401978a98a5bb2ef92945cb70e5b7161b8860ddb23742c3c0e70c8d173c"} Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.367239 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-4hlnq" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.367251 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e45b401978a98a5bb2ef92945cb70e5b7161b8860ddb23742c3c0e70c8d173c" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.368463 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" event={"ID":"73c8e32c-e771-4c02-bb99-51acdc7a231f","Type":"ContainerDied","Data":"3c5283c9eb6b47d62466162df81ddf48b4df0e10e86e30c67ec36bec0914db76"} Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.368593 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c5283c9eb6b47d62466162df81ddf48b4df0e10e86e30c67ec36bec0914db76" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.368604 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7861-account-create-update-mkrjf" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.391954 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.528996 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-log-httpd\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529153 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-combined-ca-bundle\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529222 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2fkk\" (UniqueName: \"kubernetes.io/projected/d98a718d-963c-477d-875a-d0120df577a9-kube-api-access-n2fkk\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529240 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-config-data\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529285 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-sg-core-conf-yaml\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529307 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-scripts\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529396 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-run-httpd\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529449 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-ceilometer-tls-certs\") pod \"d98a718d-963c-477d-875a-d0120df577a9\" (UID: \"d98a718d-963c-477d-875a-d0120df577a9\") " Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529576 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.529897 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.530044 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.533824 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-scripts" (OuterVolumeSpecName: "scripts") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.533989 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98a718d-963c-477d-875a-d0120df577a9-kube-api-access-n2fkk" (OuterVolumeSpecName: "kube-api-access-n2fkk") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "kube-api-access-n2fkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.552597 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.569274 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.588893 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.600378 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-config-data" (OuterVolumeSpecName: "config-data") pod "d98a718d-963c-477d-875a-d0120df577a9" (UID: "d98a718d-963c-477d-875a-d0120df577a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.632010 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2fkk\" (UniqueName: \"kubernetes.io/projected/d98a718d-963c-477d-875a-d0120df577a9-kube-api-access-n2fkk\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.632355 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.632371 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.632383 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.632394 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d98a718d-963c-477d-875a-d0120df577a9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.632405 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.632419 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d98a718d-963c-477d-875a-d0120df577a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:29:44 crc kubenswrapper[5023]: I0219 08:29:44.916526 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.378419 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d98a718d-963c-477d-875a-d0120df577a9","Type":"ContainerDied","Data":"3e3f44a1f8e89b27ee504d08163af6af276f1c90aa2f1b3354dc2ff5d09a3825"} Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.378475 5023 scope.go:117] "RemoveContainer" containerID="ee0daf028a717d8e42c4e08ac95f6f11cefa1094c121ed8642cdfcd223bf193e" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.378699 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.404311 5023 scope.go:117] "RemoveContainer" containerID="57d145e0a4326404471bae9169f2f1e8fe27640b86bdda933667451f790daf91" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.410908 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.418354 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.432000 5023 scope.go:117] "RemoveContainer" containerID="c131f3a8272a9bc9e52900a41a40e3cecb6cd3d255a87efa6be6f848d785012f" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.436881 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:45 crc kubenswrapper[5023]: E0219 08:29:45.437532 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="proxy-httpd" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.437556 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="proxy-httpd" Feb 19 08:29:45 crc kubenswrapper[5023]: E0219 08:29:45.437563 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c8e32c-e771-4c02-bb99-51acdc7a231f" containerName="mariadb-account-create-update" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.437570 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c8e32c-e771-4c02-bb99-51acdc7a231f" containerName="mariadb-account-create-update" Feb 19 08:29:45 crc kubenswrapper[5023]: E0219 08:29:45.437586 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="sg-core" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.437592 5023 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="sg-core" Feb 19 08:29:45 crc kubenswrapper[5023]: E0219 08:29:45.437644 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-notification-agent" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.437658 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-notification-agent" Feb 19 08:29:45 crc kubenswrapper[5023]: E0219 08:29:45.437670 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925dbd8c-6e2e-40fd-84d3-e61de27c7ad9" containerName="mariadb-database-create" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.437683 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="925dbd8c-6e2e-40fd-84d3-e61de27c7ad9" containerName="mariadb-database-create" Feb 19 08:29:45 crc kubenswrapper[5023]: E0219 08:29:45.437691 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-central-agent" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.437697 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-central-agent" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.438714 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="sg-core" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.438758 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="925dbd8c-6e2e-40fd-84d3-e61de27c7ad9" containerName="mariadb-database-create" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.438781 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-central-agent" Feb 19 08:29:45 crc 
kubenswrapper[5023]: I0219 08:29:45.438797 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="proxy-httpd" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.438808 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98a718d-963c-477d-875a-d0120df577a9" containerName="ceilometer-notification-agent" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.438820 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c8e32c-e771-4c02-bb99-51acdc7a231f" containerName="mariadb-account-create-update" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.441403 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.446518 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.447064 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.453781 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.458833 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.464926 5023 scope.go:117] "RemoveContainer" containerID="f3b8ec632030c716431bb872336dd561e8bf680f25dfe5ab90e1273041c29a3c" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.487663 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98a718d-963c-477d-875a-d0120df577a9" path="/var/lib/kubelet/pods/d98a718d-963c-477d-875a-d0120df577a9/volumes" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.547793 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-run-httpd\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.547889 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.547931 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.547976 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jktk\" (UniqueName: \"kubernetes.io/projected/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-kube-api-access-6jktk\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.549340 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-config-data\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.549435 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.549497 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-log-httpd\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.549545 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-scripts\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.650829 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-config-data\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.650901 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.650927 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-log-httpd\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.650949 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-scripts\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.650980 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-run-httpd\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.651212 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.651239 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.651261 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jktk\" (UniqueName: \"kubernetes.io/projected/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-kube-api-access-6jktk\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.653225 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-log-httpd\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.653582 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-run-httpd\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.659050 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.661318 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-config-data\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.662193 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.662731 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-scripts\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.664856 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.674782 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jktk\" (UniqueName: \"kubernetes.io/projected/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-kube-api-access-6jktk\") pod \"ceilometer-0\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.773747 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.983609 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-sync-7mxjf"] Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.986773 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.989532 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-64rj4" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.989599 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.989796 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Feb 19 08:29:45 crc kubenswrapper[5023]: I0219 08:29:45.996443 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-7mxjf"] Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.147674 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.158989 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-etc-machine-id\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.159208 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-combined-ca-bundle\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.159338 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-config-data\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.159475 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6rsp\" (UniqueName: \"kubernetes.io/projected/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-kube-api-access-g6rsp\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.159576 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-db-sync-config-data\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.159686 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-scripts\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.230450 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.261059 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-db-sync-config-data\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" 
Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.261505 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-scripts\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.261559 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-etc-machine-id\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.261597 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-combined-ca-bundle\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.261664 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-config-data\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.261696 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6rsp\" (UniqueName: \"kubernetes.io/projected/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-kube-api-access-g6rsp\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.261768 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-etc-machine-id\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.267156 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-db-sync-config-data\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.267744 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-scripts\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.268076 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-combined-ca-bundle\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.269270 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-config-data\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.284229 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6rsp\" (UniqueName: 
\"kubernetes.io/projected/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-kube-api-access-g6rsp\") pod \"cinder-db-sync-7mxjf\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.304304 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.392988 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerStarted","Data":"d2d2051f85443ab614ceb2d8b1b5f9ac2bbd475991072c2a8419d2e977120bf4"} Feb 19 08:29:46 crc kubenswrapper[5023]: I0219 08:29:46.735324 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-7mxjf"] Feb 19 08:29:47 crc kubenswrapper[5023]: I0219 08:29:47.327723 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:47 crc kubenswrapper[5023]: I0219 08:29:47.412427 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" event={"ID":"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e","Type":"ContainerStarted","Data":"aa6482124354971e076fa01ac94a03e9a479f19b023bf9b77a2a99ecdbdfddce"} Feb 19 08:29:47 crc kubenswrapper[5023]: I0219 08:29:47.421396 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerStarted","Data":"be959a2cbae35d370913a26aeee9c32be3723c3a5d9cc416797e4691dd264086"} Feb 19 08:29:48 crc kubenswrapper[5023]: I0219 08:29:48.432676 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerStarted","Data":"c26116957b0124cf3f5ae089ede6dc1b933074af15c1c44d8943a1a182d22835"} Feb 19 08:29:48 crc kubenswrapper[5023]: I0219 08:29:48.521759 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:49 crc kubenswrapper[5023]: I0219 08:29:49.441262 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerStarted","Data":"70f81792498c56388b0f6481cb6e403307cf9462ffeb49de13a521c25905a9a2"} Feb 19 08:29:49 crc kubenswrapper[5023]: I0219 08:29:49.844826 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:51 crc kubenswrapper[5023]: I0219 08:29:51.123958 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:51 crc kubenswrapper[5023]: I0219 08:29:51.468236 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerStarted","Data":"e6a17b68acc5159d9af46a391a18695a20d4de15873000f08d041ead5b60b762"} Feb 19 08:29:51 crc kubenswrapper[5023]: I0219 08:29:51.468552 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:29:51 crc kubenswrapper[5023]: I0219 08:29:51.493304 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.018445783 podStartE2EDuration="6.493284022s" podCreationTimestamp="2026-02-19 08:29:45 +0000 UTC" 
firstStartedPulling="2026-02-19 08:29:46.235644459 +0000 UTC m=+1743.892763407" lastFinishedPulling="2026-02-19 08:29:50.710482698 +0000 UTC m=+1748.367601646" observedRunningTime="2026-02-19 08:29:51.489168123 +0000 UTC m=+1749.146287091" watchObservedRunningTime="2026-02-19 08:29:51.493284022 +0000 UTC m=+1749.150402970" Feb 19 08:29:52 crc kubenswrapper[5023]: I0219 08:29:52.353281 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:52 crc kubenswrapper[5023]: I0219 08:29:52.476819 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:29:52 crc kubenswrapper[5023]: E0219 08:29:52.477250 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:29:53 crc kubenswrapper[5023]: I0219 08:29:53.584750 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:54 crc kubenswrapper[5023]: I0219 08:29:54.751683 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:55 crc kubenswrapper[5023]: I0219 08:29:55.927732 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 
08:29:57 crc kubenswrapper[5023]: I0219 08:29:57.120437 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:58 crc kubenswrapper[5023]: I0219 08:29:58.312953 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:29:59 crc kubenswrapper[5023]: I0219 08:29:59.520413 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.192374 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf"] Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.193804 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.199181 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.199467 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.213986 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf"] Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.363729 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-secret-volume\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.363802 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f4zj\" (UniqueName: \"kubernetes.io/projected/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-kube-api-access-8f4zj\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.363827 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-config-volume\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.464891 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f4zj\" (UniqueName: \"kubernetes.io/projected/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-kube-api-access-8f4zj\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.464954 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-config-volume\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.465914 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-config-volume\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.465967 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-secret-volume\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.484888 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-secret-volume\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.487314 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f4zj\" (UniqueName: \"kubernetes.io/projected/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-kube-api-access-8f4zj\") pod \"collect-profiles-29524830-qhhhf\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.527057 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:00 crc kubenswrapper[5023]: I0219 08:30:00.712525 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:01 crc kubenswrapper[5023]: I0219 08:30:01.913926 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:02 crc kubenswrapper[5023]: I0219 08:30:02.753262 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf"] Feb 19 08:30:03 crc kubenswrapper[5023]: I0219 08:30:03.112887 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:03 crc kubenswrapper[5023]: I0219 08:30:03.593844 5023 generic.go:334] "Generic (PLEG): container finished" podID="caf9e6b8-79b5-4c2b-b45a-be85b8aaece9" 
containerID="7b67b65a982073c7a15b429d026cf67ae690b558b840838121ef6cc499534a6b" exitCode=0 Feb 19 08:30:03 crc kubenswrapper[5023]: I0219 08:30:03.593920 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" event={"ID":"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9","Type":"ContainerDied","Data":"7b67b65a982073c7a15b429d026cf67ae690b558b840838121ef6cc499534a6b"} Feb 19 08:30:03 crc kubenswrapper[5023]: I0219 08:30:03.593953 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" event={"ID":"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9","Type":"ContainerStarted","Data":"d8fcd6bb8a33b9954f883e68ff66cb816cf5daef7e1d6c86b7a65dbafbab0e3a"} Feb 19 08:30:03 crc kubenswrapper[5023]: I0219 08:30:03.596308 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" event={"ID":"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e","Type":"ContainerStarted","Data":"d782967d86d08dab08a3b9f0e1f1b25fcb938b2ce84606b8a74e55d2fdc451ca"} Feb 19 08:30:03 crc kubenswrapper[5023]: I0219 08:30:03.635859 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" podStartSLOduration=2.998547639 podStartE2EDuration="18.635833358s" podCreationTimestamp="2026-02-19 08:29:45 +0000 UTC" firstStartedPulling="2026-02-19 08:29:46.732088681 +0000 UTC m=+1744.389207629" lastFinishedPulling="2026-02-19 08:30:02.3693744 +0000 UTC m=+1760.026493348" observedRunningTime="2026-02-19 08:30:03.630074854 +0000 UTC m=+1761.287193802" watchObservedRunningTime="2026-02-19 08:30:03.635833358 +0000 UTC m=+1761.292952306" Feb 19 08:30:04 crc kubenswrapper[5023]: I0219 08:30:04.318036 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:04 crc 
kubenswrapper[5023]: I0219 08:30:04.968910 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.047920 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-secret-volume\") pod \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.047995 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-config-volume\") pod \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.048038 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f4zj\" (UniqueName: \"kubernetes.io/projected/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-kube-api-access-8f4zj\") pod \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\" (UID: \"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9\") " Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.048921 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-config-volume" (OuterVolumeSpecName: "config-volume") pod "caf9e6b8-79b5-4c2b-b45a-be85b8aaece9" (UID: "caf9e6b8-79b5-4c2b-b45a-be85b8aaece9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.059803 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "caf9e6b8-79b5-4c2b-b45a-be85b8aaece9" (UID: "caf9e6b8-79b5-4c2b-b45a-be85b8aaece9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.059866 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-kube-api-access-8f4zj" (OuterVolumeSpecName: "kube-api-access-8f4zj") pod "caf9e6b8-79b5-4c2b-b45a-be85b8aaece9" (UID: "caf9e6b8-79b5-4c2b-b45a-be85b8aaece9"). InnerVolumeSpecName "kube-api-access-8f4zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.149954 5023 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.149990 5023 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.150001 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f4zj\" (UniqueName: \"kubernetes.io/projected/caf9e6b8-79b5-4c2b-b45a-be85b8aaece9-kube-api-access-8f4zj\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.539526 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.613634 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" event={"ID":"caf9e6b8-79b5-4c2b-b45a-be85b8aaece9","Type":"ContainerDied","Data":"d8fcd6bb8a33b9954f883e68ff66cb816cf5daef7e1d6c86b7a65dbafbab0e3a"} Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.613700 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8fcd6bb8a33b9954f883e68ff66cb816cf5daef7e1d6c86b7a65dbafbab0e3a" Feb 19 08:30:05 crc kubenswrapper[5023]: I0219 08:30:05.613765 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524830-qhhhf" Feb 19 08:30:06 crc kubenswrapper[5023]: I0219 08:30:06.726697 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:07 crc kubenswrapper[5023]: I0219 08:30:07.476901 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:30:07 crc kubenswrapper[5023]: E0219 08:30:07.477204 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:30:07 crc kubenswrapper[5023]: I0219 08:30:07.925444 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:08 crc kubenswrapper[5023]: I0219 08:30:08.638670 5023 generic.go:334] "Generic (PLEG): container finished" podID="47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" containerID="d782967d86d08dab08a3b9f0e1f1b25fcb938b2ce84606b8a74e55d2fdc451ca" exitCode=0 Feb 19 08:30:08 crc kubenswrapper[5023]: I0219 08:30:08.638715 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" event={"ID":"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e","Type":"ContainerDied","Data":"d782967d86d08dab08a3b9f0e1f1b25fcb938b2ce84606b8a74e55d2fdc451ca"} Feb 19 08:30:09 crc kubenswrapper[5023]: I0219 08:30:09.110480 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:09 crc kubenswrapper[5023]: I0219 08:30:09.981609 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.134758 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-combined-ca-bundle\") pod \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.134958 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6rsp\" (UniqueName: \"kubernetes.io/projected/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-kube-api-access-g6rsp\") pod \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.134993 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-scripts\") pod \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.135031 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-db-sync-config-data\") pod \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.135047 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-config-data\") pod \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.135074 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-etc-machine-id\") pod \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\" (UID: \"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e\") " Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.135410 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" (UID: "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.141409 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-scripts" (OuterVolumeSpecName: "scripts") pod "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" (UID: "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.141421 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-kube-api-access-g6rsp" (OuterVolumeSpecName: "kube-api-access-g6rsp") pod "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" (UID: "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e"). InnerVolumeSpecName "kube-api-access-g6rsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.141527 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" (UID: "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.165764 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" (UID: "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.185830 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-config-data" (OuterVolumeSpecName: "config-data") pod "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" (UID: "47f0281e-378e-4f3d-bfa4-3d8ac1ec026e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.237550 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6rsp\" (UniqueName: \"kubernetes.io/projected/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-kube-api-access-g6rsp\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.237605 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.237668 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.237688 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-config-data\") on node \"crc\" DevicePath \"\"" Feb 
19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.237707 5023 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.237725 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.289091 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.655859 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" event={"ID":"47f0281e-378e-4f3d-bfa4-3d8ac1ec026e","Type":"ContainerDied","Data":"aa6482124354971e076fa01ac94a03e9a479f19b023bf9b77a2a99ecdbdfddce"} Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.656109 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa6482124354971e076fa01ac94a03e9a479f19b023bf9b77a2a99ecdbdfddce" Feb 19 08:30:10 crc kubenswrapper[5023]: I0219 08:30:10.656218 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-7mxjf" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.029940 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:11 crc kubenswrapper[5023]: E0219 08:30:11.030550 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf9e6b8-79b5-4c2b-b45a-be85b8aaece9" containerName="collect-profiles" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.030674 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf9e6b8-79b5-4c2b-b45a-be85b8aaece9" containerName="collect-profiles" Feb 19 08:30:11 crc kubenswrapper[5023]: E0219 08:30:11.030766 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" containerName="cinder-db-sync" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.030840 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" containerName="cinder-db-sync" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.031058 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" containerName="cinder-db-sync" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.031136 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf9e6b8-79b5-4c2b-b45a-be85b8aaece9" containerName="collect-profiles" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.032233 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.035143 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.035364 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.041955 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-64rj4" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.042349 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.050134 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.051766 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.055316 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.058883 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.088712 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.150703 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5ds\" (UniqueName: \"kubernetes.io/projected/a78c9bf8-8035-4425-b25b-ed73bdefb753-kube-api-access-lk5ds\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.151788 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qprx\" (UniqueName: \"kubernetes.io/projected/d601af74-bd02-4e31-baf2-b27019bb5b71-kube-api-access-4qprx\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.151941 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152090 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-scripts\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152240 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152304 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-run\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152367 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152513 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152551 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a78c9bf8-8035-4425-b25b-ed73bdefb753-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152607 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-sys\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152727 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152775 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152817 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152867 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-lib-modules\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152898 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152925 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152946 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152970 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.152996 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.153024 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-dev\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.153065 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.153096 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-scripts\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.153133 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.254974 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-combined-ca-bundle\") 
pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255233 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-scripts\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255336 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255412 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-run\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255541 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255542 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-run\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: 
I0219 08:30:11.255745 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a78c9bf8-8035-4425-b25b-ed73bdefb753-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255825 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255877 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.255817 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a78c9bf8-8035-4425-b25b-ed73bdefb753-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256006 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-sys\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256061 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-sys\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256162 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256246 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256325 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256399 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-lib-modules\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256473 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-brick\") pod \"cinder-backup-0\" (UID: 
\"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256546 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256635 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256723 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256799 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.256882 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-dev\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 
08:30:11.256961 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257093 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-scripts\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257188 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257262 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5ds\" (UniqueName: \"kubernetes.io/projected/a78c9bf8-8035-4425-b25b-ed73bdefb753-kube-api-access-lk5ds\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257351 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qprx\" (UniqueName: \"kubernetes.io/projected/d601af74-bd02-4e31-baf2-b27019bb5b71-kube-api-access-4qprx\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257408 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257741 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257900 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.259687 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.262726 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.264363 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-dev\") pod \"cinder-backup-0\" (UID: 
\"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.257377 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.264732 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.264810 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-lib-modules\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.266295 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.268174 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-scripts\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.268205 5023 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.269121 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.269827 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.273117 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.273946 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.274699 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.275529 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.280762 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-scripts\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.283829 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.285443 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qprx\" (UniqueName: \"kubernetes.io/projected/d601af74-bd02-4e31-baf2-b27019bb5b71-kube-api-access-4qprx\") pod \"cinder-backup-0\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.287533 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.292538 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5ds\" (UniqueName: \"kubernetes.io/projected/a78c9bf8-8035-4425-b25b-ed73bdefb753-kube-api-access-lk5ds\") pod \"cinder-scheduler-0\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.358585 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc 
kubenswrapper[5023]: I0219 08:30:11.358652 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66f9956-1b42-4f4a-b559-458fca4d2de7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.358690 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.358710 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.358724 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-scripts\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.358758 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpr9m\" (UniqueName: \"kubernetes.io/projected/f66f9956-1b42-4f4a-b559-458fca4d2de7-kube-api-access-jpr9m\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.358793 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.358947 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.359589 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66f9956-1b42-4f4a-b559-458fca4d2de7-logs\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.375516 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465548 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465610 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66f9956-1b42-4f4a-b559-458fca4d2de7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465674 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465709 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465745 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-scripts\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465784 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpr9m\" (UniqueName: \"kubernetes.io/projected/f66f9956-1b42-4f4a-b559-458fca4d2de7-kube-api-access-jpr9m\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465807 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.465836 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66f9956-1b42-4f4a-b559-458fca4d2de7-logs\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " 
pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.466396 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66f9956-1b42-4f4a-b559-458fca4d2de7-logs\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.471376 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.471466 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66f9956-1b42-4f4a-b559-458fca4d2de7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.473373 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-scripts\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.480080 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.481210 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.482184 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.502854 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.507598 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpr9m\" (UniqueName: \"kubernetes.io/projected/f66f9956-1b42-4f4a-b559-458fca4d2de7-kube-api-access-jpr9m\") pod \"cinder-api-0\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.700031 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:11 crc kubenswrapper[5023]: W0219 08:30:11.882590 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda78c9bf8_8035_4425_b25b_ed73bdefb753.slice/crio-cc2de950e879c92f5a0e5308c6e35fa1b7bb00e04e6ff704fb0553d7110dfc1b WatchSource:0}: Error finding container cc2de950e879c92f5a0e5308c6e35fa1b7bb00e04e6ff704fb0553d7110dfc1b: Status 404 returned error can't find the container with id cc2de950e879c92f5a0e5308c6e35fa1b7bb00e04e6ff704fb0553d7110dfc1b Feb 19 08:30:11 crc kubenswrapper[5023]: I0219 08:30:11.883438 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:12 crc kubenswrapper[5023]: I0219 08:30:12.220635 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:12 crc kubenswrapper[5023]: I0219 08:30:12.369097 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:12 crc kubenswrapper[5023]: I0219 08:30:12.700064 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"f66f9956-1b42-4f4a-b559-458fca4d2de7","Type":"ContainerStarted","Data":"30611173b7d8af57ff0a823b69686d750700816bd1645019f29528886dc0739a"} Feb 19 08:30:12 crc kubenswrapper[5023]: I0219 08:30:12.710490 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"a78c9bf8-8035-4425-b25b-ed73bdefb753","Type":"ContainerStarted","Data":"cc2de950e879c92f5a0e5308c6e35fa1b7bb00e04e6ff704fb0553d7110dfc1b"} Feb 19 08:30:12 crc kubenswrapper[5023]: I0219 08:30:12.719372 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" 
event={"ID":"d601af74-bd02-4e31-baf2-b27019bb5b71","Type":"ContainerStarted","Data":"e5c065f9c9add0763d1cbc88d163d2c5a471d019ff7e06b48e7225b101dcd210"} Feb 19 08:30:12 crc kubenswrapper[5023]: I0219 08:30:12.722332 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:13 crc kubenswrapper[5023]: I0219 08:30:13.764716 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"a78c9bf8-8035-4425-b25b-ed73bdefb753","Type":"ContainerStarted","Data":"e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9"} Feb 19 08:30:13 crc kubenswrapper[5023]: I0219 08:30:13.770810 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"f66f9956-1b42-4f4a-b559-458fca4d2de7","Type":"ContainerStarted","Data":"e7dfa4c2164bc4654e99dc79d539630cb497f219929c8508e0f2ba8d4da5bf96"} Feb 19 08:30:13 crc kubenswrapper[5023]: I0219 08:30:13.997489 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.151414 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.785699 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d601af74-bd02-4e31-baf2-b27019bb5b71","Type":"ContainerStarted","Data":"741ec6f4d90aeb4f52ef3845915adc8c42031cd01b7cd642a7d58433800eddaf"} Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.786023 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" 
event={"ID":"d601af74-bd02-4e31-baf2-b27019bb5b71","Type":"ContainerStarted","Data":"a5531e02aadeb4ae75e0c7cfb89ff323602e887cb715081bad63b403f53288af"} Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.791426 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"f66f9956-1b42-4f4a-b559-458fca4d2de7","Type":"ContainerStarted","Data":"7c0dd9551021dd59e5e7b50e22a82d03a8441cf8e7ec0baaae44478cc64ad435"} Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.791524 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api-log" containerID="cri-o://e7dfa4c2164bc4654e99dc79d539630cb497f219929c8508e0f2ba8d4da5bf96" gracePeriod=30 Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.791547 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.791595 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api" containerID="cri-o://7c0dd9551021dd59e5e7b50e22a82d03a8441cf8e7ec0baaae44478cc64ad435" gracePeriod=30 Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.798862 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"a78c9bf8-8035-4425-b25b-ed73bdefb753","Type":"ContainerStarted","Data":"371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08"} Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.825575 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.505138361 podStartE2EDuration="3.825555964s" podCreationTimestamp="2026-02-19 08:30:11 +0000 UTC" firstStartedPulling="2026-02-19 
08:30:12.23363908 +0000 UTC m=+1769.890758028" lastFinishedPulling="2026-02-19 08:30:13.554056683 +0000 UTC m=+1771.211175631" observedRunningTime="2026-02-19 08:30:14.817300514 +0000 UTC m=+1772.474419472" watchObservedRunningTime="2026-02-19 08:30:14.825555964 +0000 UTC m=+1772.482674912" Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.884217 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=3.884192584 podStartE2EDuration="3.884192584s" podCreationTimestamp="2026-02-19 08:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:30:14.881916994 +0000 UTC m=+1772.539035942" watchObservedRunningTime="2026-02-19 08:30:14.884192584 +0000 UTC m=+1772.541311532" Feb 19 08:30:14 crc kubenswrapper[5023]: I0219 08:30:14.887231 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.311285477 podStartE2EDuration="3.887216155s" podCreationTimestamp="2026-02-19 08:30:11 +0000 UTC" firstStartedPulling="2026-02-19 08:30:11.898893141 +0000 UTC m=+1769.556012089" lastFinishedPulling="2026-02-19 08:30:12.474823819 +0000 UTC m=+1770.131942767" observedRunningTime="2026-02-19 08:30:14.864153271 +0000 UTC m=+1772.521272219" watchObservedRunningTime="2026-02-19 08:30:14.887216155 +0000 UTC m=+1772.544335103" Feb 19 08:30:15 crc kubenswrapper[5023]: I0219 08:30:15.190455 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:15 crc kubenswrapper[5023]: I0219 08:30:15.802786 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:15 crc kubenswrapper[5023]: I0219 08:30:15.809457 5023 generic.go:334] 
"Generic (PLEG): container finished" podID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerID="e7dfa4c2164bc4654e99dc79d539630cb497f219929c8508e0f2ba8d4da5bf96" exitCode=143 Feb 19 08:30:15 crc kubenswrapper[5023]: I0219 08:30:15.809589 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"f66f9956-1b42-4f4a-b559-458fca4d2de7","Type":"ContainerDied","Data":"e7dfa4c2164bc4654e99dc79d539630cb497f219929c8508e0f2ba8d4da5bf96"} Feb 19 08:30:16 crc kubenswrapper[5023]: I0219 08:30:16.359243 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:16 crc kubenswrapper[5023]: I0219 08:30:16.360167 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:16 crc kubenswrapper[5023]: I0219 08:30:16.376680 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:17 crc kubenswrapper[5023]: I0219 08:30:17.562659 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:18 crc kubenswrapper[5023]: I0219 08:30:18.736666 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:19 crc kubenswrapper[5023]: I0219 08:30:19.901161 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.107213 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.577078 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.632213 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.648423 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.699892 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.871743 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="cinder-scheduler" containerID="cri-o://e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9" gracePeriod=30 Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.871978 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="cinder-backup" containerID="cri-o://a5531e02aadeb4ae75e0c7cfb89ff323602e887cb715081bad63b403f53288af" gracePeriod=30 Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.872384 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="probe" containerID="cri-o://371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08" gracePeriod=30 Feb 19 08:30:21 crc kubenswrapper[5023]: I0219 08:30:21.872442 5023 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="probe" containerID="cri-o://741ec6f4d90aeb4f52ef3845915adc8c42031cd01b7cd642a7d58433800eddaf" gracePeriod=30 Feb 19 08:30:22 crc kubenswrapper[5023]: I0219 08:30:22.332379 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:22 crc kubenswrapper[5023]: I0219 08:30:22.476869 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:30:22 crc kubenswrapper[5023]: E0219 08:30:22.477120 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:30:22 crc kubenswrapper[5023]: I0219 08:30:22.881494 5023 generic.go:334] "Generic (PLEG): container finished" podID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerID="371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08" exitCode=0 Feb 19 08:30:22 crc kubenswrapper[5023]: I0219 08:30:22.881586 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"a78c9bf8-8035-4425-b25b-ed73bdefb753","Type":"ContainerDied","Data":"371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08"} Feb 19 08:30:22 crc kubenswrapper[5023]: I0219 08:30:22.883761 5023 generic.go:334] "Generic (PLEG): container finished" podID="d601af74-bd02-4e31-baf2-b27019bb5b71" 
containerID="741ec6f4d90aeb4f52ef3845915adc8c42031cd01b7cd642a7d58433800eddaf" exitCode=0 Feb 19 08:30:22 crc kubenswrapper[5023]: I0219 08:30:22.883811 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d601af74-bd02-4e31-baf2-b27019bb5b71","Type":"ContainerDied","Data":"741ec6f4d90aeb4f52ef3845915adc8c42031cd01b7cd642a7d58433800eddaf"} Feb 19 08:30:23 crc kubenswrapper[5023]: I0219 08:30:23.278526 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:30:23 crc kubenswrapper[5023]: I0219 08:30:23.278870 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="3e943a47-fba0-42a0-9ef7-f9f677a48428" containerName="watcher-decision-engine" containerID="cri-o://dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" gracePeriod=30 Feb 19 08:30:23 crc kubenswrapper[5023]: I0219 08:30:23.538824 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:23 crc kubenswrapper[5023]: I0219 08:30:23.716442 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.215614 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.215911 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-central-agent" containerID="cri-o://be959a2cbae35d370913a26aeee9c32be3723c3a5d9cc416797e4691dd264086" gracePeriod=30 Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.216001 5023 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="sg-core" containerID="cri-o://70f81792498c56388b0f6481cb6e403307cf9462ffeb49de13a521c25905a9a2" gracePeriod=30 Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.216000 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-notification-agent" containerID="cri-o://c26116957b0124cf3f5ae089ede6dc1b933074af15c1c44d8943a1a182d22835" gracePeriod=30 Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.216000 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="proxy-httpd" containerID="cri-o://e6a17b68acc5159d9af46a391a18695a20d4de15873000f08d041ead5b60b762" gracePeriod=30 Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.724566 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.906073 5023 generic.go:334] "Generic (PLEG): container finished" podID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerID="e6a17b68acc5159d9af46a391a18695a20d4de15873000f08d041ead5b60b762" exitCode=0 Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.906125 5023 generic.go:334] "Generic (PLEG): container finished" podID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerID="70f81792498c56388b0f6481cb6e403307cf9462ffeb49de13a521c25905a9a2" exitCode=2 Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.906133 5023 generic.go:334] "Generic (PLEG): container finished" podID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" 
containerID="be959a2cbae35d370913a26aeee9c32be3723c3a5d9cc416797e4691dd264086" exitCode=0 Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.906168 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerDied","Data":"e6a17b68acc5159d9af46a391a18695a20d4de15873000f08d041ead5b60b762"} Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.906248 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerDied","Data":"70f81792498c56388b0f6481cb6e403307cf9462ffeb49de13a521c25905a9a2"} Feb 19 08:30:24 crc kubenswrapper[5023]: I0219 08:30:24.906270 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerDied","Data":"be959a2cbae35d370913a26aeee9c32be3723c3a5d9cc416797e4691dd264086"} Feb 19 08:30:25 crc kubenswrapper[5023]: I0219 08:30:25.958504 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.650807 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.708521 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-combined-ca-bundle\") pod \"a78c9bf8-8035-4425-b25b-ed73bdefb753\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.708591 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk5ds\" (UniqueName: \"kubernetes.io/projected/a78c9bf8-8035-4425-b25b-ed73bdefb753-kube-api-access-lk5ds\") pod \"a78c9bf8-8035-4425-b25b-ed73bdefb753\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.708655 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data\") pod \"a78c9bf8-8035-4425-b25b-ed73bdefb753\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.708764 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data-custom\") pod \"a78c9bf8-8035-4425-b25b-ed73bdefb753\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.708792 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-cert-memcached-mtls\") pod \"a78c9bf8-8035-4425-b25b-ed73bdefb753\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.708905 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-scripts\") pod \"a78c9bf8-8035-4425-b25b-ed73bdefb753\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.708939 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a78c9bf8-8035-4425-b25b-ed73bdefb753-etc-machine-id\") pod \"a78c9bf8-8035-4425-b25b-ed73bdefb753\" (UID: \"a78c9bf8-8035-4425-b25b-ed73bdefb753\") " Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.709453 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a78c9bf8-8035-4425-b25b-ed73bdefb753-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a78c9bf8-8035-4425-b25b-ed73bdefb753" (UID: "a78c9bf8-8035-4425-b25b-ed73bdefb753"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.716922 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78c9bf8-8035-4425-b25b-ed73bdefb753-kube-api-access-lk5ds" (OuterVolumeSpecName: "kube-api-access-lk5ds") pod "a78c9bf8-8035-4425-b25b-ed73bdefb753" (UID: "a78c9bf8-8035-4425-b25b-ed73bdefb753"). InnerVolumeSpecName "kube-api-access-lk5ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.717062 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-scripts" (OuterVolumeSpecName: "scripts") pod "a78c9bf8-8035-4425-b25b-ed73bdefb753" (UID: "a78c9bf8-8035-4425-b25b-ed73bdefb753"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.717486 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a78c9bf8-8035-4425-b25b-ed73bdefb753" (UID: "a78c9bf8-8035-4425-b25b-ed73bdefb753"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.779678 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a78c9bf8-8035-4425-b25b-ed73bdefb753" (UID: "a78c9bf8-8035-4425-b25b-ed73bdefb753"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.811612 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.811659 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk5ds\" (UniqueName: \"kubernetes.io/projected/a78c9bf8-8035-4425-b25b-ed73bdefb753-kube-api-access-lk5ds\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.811669 5023 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.811677 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.811685 5023 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a78c9bf8-8035-4425-b25b-ed73bdefb753-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.858781 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data" (OuterVolumeSpecName: "config-data") pod "a78c9bf8-8035-4425-b25b-ed73bdefb753" (UID: "a78c9bf8-8035-4425-b25b-ed73bdefb753"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.903987 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a78c9bf8-8035-4425-b25b-ed73bdefb753" (UID: "a78c9bf8-8035-4425-b25b-ed73bdefb753"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.913590 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.913646 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78c9bf8-8035-4425-b25b-ed73bdefb753-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.927431 5023 generic.go:334] "Generic (PLEG): container finished" podID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerID="e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9" exitCode=0 Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.927482 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"a78c9bf8-8035-4425-b25b-ed73bdefb753","Type":"ContainerDied","Data":"e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9"} Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.927550 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"a78c9bf8-8035-4425-b25b-ed73bdefb753","Type":"ContainerDied","Data":"cc2de950e879c92f5a0e5308c6e35fa1b7bb00e04e6ff704fb0553d7110dfc1b"} Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.927505 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.927587 5023 scope.go:117] "RemoveContainer" containerID="371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08" Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.931790 5023 generic.go:334] "Generic (PLEG): container finished" podID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerID="a5531e02aadeb4ae75e0c7cfb89ff323602e887cb715081bad63b403f53288af" exitCode=0 Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.931823 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d601af74-bd02-4e31-baf2-b27019bb5b71","Type":"ContainerDied","Data":"a5531e02aadeb4ae75e0c7cfb89ff323602e887cb715081bad63b403f53288af"} Feb 19 08:30:26 crc kubenswrapper[5023]: I0219 08:30:26.975990 5023 scope.go:117] "RemoveContainer" containerID="e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.012536 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.026107 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.026461 5023 scope.go:117] "RemoveContainer" containerID="371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08" Feb 19 08:30:27 crc kubenswrapper[5023]: E0219 08:30:27.027040 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08\": container with ID starting with 371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08 not found: ID does not exist" containerID="371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08" Feb 19 08:30:27 crc 
kubenswrapper[5023]: I0219 08:30:27.027094 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08"} err="failed to get container status \"371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08\": rpc error: code = NotFound desc = could not find container \"371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08\": container with ID starting with 371190b7b47d144c957d2806b878f598cfdb41a285891e74a9e2a62e04eced08 not found: ID does not exist" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.027124 5023 scope.go:117] "RemoveContainer" containerID="e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9" Feb 19 08:30:27 crc kubenswrapper[5023]: E0219 08:30:27.027402 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9\": container with ID starting with e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9 not found: ID does not exist" containerID="e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.027485 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9"} err="failed to get container status \"e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9\": rpc error: code = NotFound desc = could not find container \"e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9\": container with ID starting with e19d8ad407c1cd1bbe85504e2a1c30cfdc2e2f5bfa8f09a23c711dd520ab9ce9 not found: ID does not exist" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.041949 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:27 crc 
kubenswrapper[5023]: E0219 08:30:27.042559 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="cinder-scheduler" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.042653 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="cinder-scheduler" Feb 19 08:30:27 crc kubenswrapper[5023]: E0219 08:30:27.042736 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="probe" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.042802 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="probe" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.043005 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="cinder-scheduler" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.043097 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" containerName="probe" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.044056 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.046609 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.056394 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.119728 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.120027 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.120171 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef60639-6272-46c7-8fde-15ce9d7e7ded-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.120258 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.120367 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.120540 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82g9v\" (UniqueName: \"kubernetes.io/projected/3ef60639-6272-46c7-8fde-15ce9d7e7ded-kube-api-access-82g9v\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.120692 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.148821 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.209006 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221599 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-sys\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221671 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-run\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221720 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-nvme\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221767 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-iscsi\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221774 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-sys" (OuterVolumeSpecName: "sys") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). 
InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221807 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-lib-modules\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221866 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221911 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221907 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221942 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data-custom\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.221961 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-run" (OuterVolumeSpecName: "run") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222047 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222074 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-dev\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222121 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-lib-cinder\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222176 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-4qprx\" (UniqueName: \"kubernetes.io/projected/d601af74-bd02-4e31-baf2-b27019bb5b71-kube-api-access-4qprx\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222203 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-cinder\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222242 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-combined-ca-bundle\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222311 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-brick\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222362 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-cert-memcached-mtls\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222404 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-scripts\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " 
Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222468 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-machine-id\") pod \"d601af74-bd02-4e31-baf2-b27019bb5b71\" (UID: \"d601af74-bd02-4e31-baf2-b27019bb5b71\") " Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222573 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222571 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-dev" (OuterVolumeSpecName: "dev") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222670 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.222577 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.223008 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.223525 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef60639-6272-46c7-8fde-15ce9d7e7ded-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.223667 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.223704 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-cert-memcached-mtls\") pod 
\"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.223830 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef60639-6272-46c7-8fde-15ce9d7e7ded-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.223891 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82g9v\" (UniqueName: \"kubernetes.io/projected/3ef60639-6272-46c7-8fde-15ce9d7e7ded-kube-api-access-82g9v\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.223998 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224131 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224215 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224356 5023 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-dev\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224388 5023 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224404 5023 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224419 5023 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-var-locks-brick\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224434 5023 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224447 5023 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-sys\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224458 5023 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224470 5023 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-nvme\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224483 5023 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-etc-iscsi\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.224496 5023 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d601af74-bd02-4e31-baf2-b27019bb5b71-lib-modules\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.225386 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d601af74-bd02-4e31-baf2-b27019bb5b71-kube-api-access-4qprx" (OuterVolumeSpecName: "kube-api-access-4qprx") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "kube-api-access-4qprx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.228813 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-scripts" (OuterVolumeSpecName: "scripts") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.229419 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.229616 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.230488 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.233083 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.236077 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.236951 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.252971 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82g9v\" (UniqueName: \"kubernetes.io/projected/3ef60639-6272-46c7-8fde-15ce9d7e7ded-kube-api-access-82g9v\") pod \"cinder-scheduler-0\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.291176 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.326143 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.326180 5023 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.326189 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qprx\" (UniqueName: \"kubernetes.io/projected/d601af74-bd02-4e31-baf2-b27019bb5b71-kube-api-access-4qprx\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.326197 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.345119 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data" (OuterVolumeSpecName: "config-data") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.373399 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d601af74-bd02-4e31-baf2-b27019bb5b71" (UID: "d601af74-bd02-4e31-baf2-b27019bb5b71"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.427876 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.427917 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d601af74-bd02-4e31-baf2-b27019bb5b71-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.444571 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.496917 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78c9bf8-8035-4425-b25b-ed73bdefb753" path="/var/lib/kubelet/pods/a78c9bf8-8035-4425-b25b-ed73bdefb753/volumes" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.945314 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"d601af74-bd02-4e31-baf2-b27019bb5b71","Type":"ContainerDied","Data":"e5c065f9c9add0763d1cbc88d163d2c5a471d019ff7e06b48e7225b101dcd210"} Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.945855 5023 scope.go:117] "RemoveContainer" containerID="741ec6f4d90aeb4f52ef3845915adc8c42031cd01b7cd642a7d58433800eddaf" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.945608 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.976658 5023 scope.go:117] "RemoveContainer" containerID="a5531e02aadeb4ae75e0c7cfb89ff323602e887cb715081bad63b403f53288af" Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.976842 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:27 crc kubenswrapper[5023]: I0219 08:30:27.982952 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.002828 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.010986 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:28 crc kubenswrapper[5023]: E0219 08:30:28.011339 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="cinder-backup" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.011360 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="cinder-backup" Feb 19 08:30:28 crc kubenswrapper[5023]: E0219 08:30:28.028825 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="probe" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.028872 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="probe" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.029246 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="cinder-backup" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.029265 5023 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" containerName="probe" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.030185 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.041839 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.042230 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146605 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146680 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krvfq\" (UniqueName: \"kubernetes.io/projected/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-kube-api-access-krvfq\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146745 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146769 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146791 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-sys\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146816 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146838 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146868 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146881 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-dev\") 
pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146915 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-scripts\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146955 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-lib-modules\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.146977 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.147011 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.147041 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: 
\"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.147065 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-run\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.147087 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248077 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248338 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-sys\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248362 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc 
kubenswrapper[5023]: I0219 08:30:28.248382 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248400 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248428 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248442 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-dev\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248472 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-scripts\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248533 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-lib-modules\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248548 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248567 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248584 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248605 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-run\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248636 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: 
\"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248654 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248670 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krvfq\" (UniqueName: \"kubernetes.io/projected/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-kube-api-access-krvfq\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.248994 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-sys\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.249473 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.249555 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.249602 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.249644 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-dev\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.250488 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-lib-modules\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.250913 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.250956 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-run\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.250988 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.251023 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.252353 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.252412 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-scripts\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.254315 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.255165 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.258823 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.265296 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krvfq\" (UniqueName: \"kubernetes.io/projected/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-kube-api-access-krvfq\") pod \"cinder-backup-0\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.414838 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.498194 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.966330 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.973498 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"3ef60639-6272-46c7-8fde-15ce9d7e7ded","Type":"ContainerStarted","Data":"3e6e6daaace73bee93c2c66b23bca69f4fa927d4910a4b075718a81766441064"} Feb 19 08:30:28 crc kubenswrapper[5023]: I0219 08:30:28.973569 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"3ef60639-6272-46c7-8fde-15ce9d7e7ded","Type":"ContainerStarted","Data":"6031188faee7307855a7c8800499901154b8569ee4b782afe0dffa3153437d31"} Feb 19 08:30:29 crc kubenswrapper[5023]: E0219 08:30:29.290122 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 08:30:29 crc kubenswrapper[5023]: E0219 08:30:29.321539 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 08:30:29 crc kubenswrapper[5023]: E0219 08:30:29.327914 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 08:30:29 crc kubenswrapper[5023]: E0219 08:30:29.327976 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="3e943a47-fba0-42a0-9ef7-f9f677a48428" containerName="watcher-decision-engine" Feb 19 08:30:29 crc kubenswrapper[5023]: I0219 08:30:29.516792 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d601af74-bd02-4e31-baf2-b27019bb5b71" path="/var/lib/kubelet/pods/d601af74-bd02-4e31-baf2-b27019bb5b71/volumes" Feb 19 08:30:29 crc kubenswrapper[5023]: I0219 08:30:29.605991 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.002448 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"3ef60639-6272-46c7-8fde-15ce9d7e7ded","Type":"ContainerStarted","Data":"27733719e1d3cadb779ba93ace5f7511df0402a64de51d4138273b80d5832b4e"} Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.004202 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"7c0f1466-ab85-4e39-a327-45d9e00e8e8e","Type":"ContainerStarted","Data":"6f689d956cc4b5d702fde8897d651b4c4937657ecfa109bb0284883de8e0287c"} Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.004240 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"7c0f1466-ab85-4e39-a327-45d9e00e8e8e","Type":"ContainerStarted","Data":"eb5b94302180aa38d0b09bc233e682ed8f8ce4054c320ba0cf321af329f5cd1f"} Feb 
19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.004251 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"7c0f1466-ab85-4e39-a327-45d9e00e8e8e","Type":"ContainerStarted","Data":"9fa344343f55882f98850fa8b20e26b9e80d07358aecc96ecdf268ab7b86771c"} Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.057544 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=4.057519134 podStartE2EDuration="4.057519134s" podCreationTimestamp="2026-02-19 08:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:30:30.030510135 +0000 UTC m=+1787.687629083" watchObservedRunningTime="2026-02-19 08:30:30.057519134 +0000 UTC m=+1787.714638082" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.057700 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=3.057694788 podStartE2EDuration="3.057694788s" podCreationTimestamp="2026-02-19 08:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:30:30.055748377 +0000 UTC m=+1787.712867325" watchObservedRunningTime="2026-02-19 08:30:30.057694788 +0000 UTC m=+1787.714813736" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.376185 5023 scope.go:117] "RemoveContainer" containerID="1cf851f02571033dc5d4ea5899be72d10c06d98f0fc873694134a7db5400e1f6" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.421134 5023 scope.go:117] "RemoveContainer" containerID="95415aa5565d7eb4dede9db3b3910be8a304e6a497f69508fcd75cff277d8935" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.445505 5023 scope.go:117] "RemoveContainer" containerID="aa2bb14052a3c75a2cc5eecc10d084e351ff76c1e65278fb9608315662e5acfa" Feb 19 
08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.493477 5023 scope.go:117] "RemoveContainer" containerID="c73edfc2b2711e55eafc83c17bc03f63cf8ae448159b7f97c999403af864b6c0" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.516864 5023 scope.go:117] "RemoveContainer" containerID="488c24ed5b3be5df8a02d168d439d1e71d7a53b26e50cf88ec6103ba918c8021" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.559966 5023 scope.go:117] "RemoveContainer" containerID="d0e0cc640f1883672496b2b7b98cb59f1bc10d053f6a836606886f7b12748483" Feb 19 08:30:30 crc kubenswrapper[5023]: I0219 08:30:30.813950 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.024610 5023 generic.go:334] "Generic (PLEG): container finished" podID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerID="c26116957b0124cf3f5ae089ede6dc1b933074af15c1c44d8943a1a182d22835" exitCode=0 Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.025661 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerDied","Data":"c26116957b0124cf3f5ae089ede6dc1b933074af15c1c44d8943a1a182d22835"} Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.119491 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.216097 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jktk\" (UniqueName: \"kubernetes.io/projected/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-kube-api-access-6jktk\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.217307 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-scripts\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.217377 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-config-data\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.217441 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-run-httpd\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.217471 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-log-httpd\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.217502 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-combined-ca-bundle\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.217541 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-sg-core-conf-yaml\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.217584 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-ceilometer-tls-certs\") pod \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\" (UID: \"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97\") " Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.218217 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.218896 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.225075 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-kube-api-access-6jktk" (OuterVolumeSpecName: "kube-api-access-6jktk") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "kube-api-access-6jktk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.225345 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-scripts" (OuterVolumeSpecName: "scripts") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.245727 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.273103 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.295673 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322406 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322439 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322448 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322462 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322471 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322485 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jktk\" (UniqueName: 
\"kubernetes.io/projected/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-kube-api-access-6jktk\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322495 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.322911 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-config-data" (OuterVolumeSpecName: "config-data") pod "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" (UID: "dd0acf90-9fae-4fe0-a75f-4aa932cd1c97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:31 crc kubenswrapper[5023]: I0219 08:30:31.424262 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.037205 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dd0acf90-9fae-4fe0-a75f-4aa932cd1c97","Type":"ContainerDied","Data":"d2d2051f85443ab614ceb2d8b1b5f9ac2bbd475991072c2a8419d2e977120bf4"} Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.037254 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.037552 5023 scope.go:117] "RemoveContainer" containerID="e6a17b68acc5159d9af46a391a18695a20d4de15873000f08d041ead5b60b762" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.076531 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.077932 5023 scope.go:117] "RemoveContainer" containerID="70f81792498c56388b0f6481cb6e403307cf9462ffeb49de13a521c25905a9a2" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.087939 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.096736 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.109789 5023 scope.go:117] "RemoveContainer" containerID="c26116957b0124cf3f5ae089ede6dc1b933074af15c1c44d8943a1a182d22835" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.114383 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:32 crc kubenswrapper[5023]: E0219 08:30:32.115070 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="sg-core" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115095 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="sg-core" Feb 19 08:30:32 crc kubenswrapper[5023]: E0219 08:30:32.115115 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="proxy-httpd" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115135 5023 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="proxy-httpd" Feb 19 08:30:32 crc kubenswrapper[5023]: E0219 08:30:32.115145 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-notification-agent" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115152 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-notification-agent" Feb 19 08:30:32 crc kubenswrapper[5023]: E0219 08:30:32.115173 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-central-agent" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115181 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-central-agent" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115416 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-central-agent" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115481 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="sg-core" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115501 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="proxy-httpd" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.115512 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" containerName="ceilometer-notification-agent" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.117556 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.123652 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.123886 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.124156 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.150000 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.185848 5023 scope.go:117] "RemoveContainer" containerID="be959a2cbae35d370913a26aeee9c32be3723c3a5d9cc416797e4691dd264086" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.242226 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.242278 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.242321 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-run-httpd\") 
pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.242457 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.242562 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-config-data\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.242610 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-scripts\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.242847 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5k4\" (UniqueName: \"kubernetes.io/projected/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-kube-api-access-9d5k4\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.243048 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-log-httpd\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344198 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-log-httpd\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344275 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344301 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344333 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-run-httpd\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344359 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344394 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-config-data\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344418 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-scripts\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.344443 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5k4\" (UniqueName: \"kubernetes.io/projected/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-kube-api-access-9d5k4\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.345020 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-run-httpd\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.345258 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-log-httpd\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.352431 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.352612 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-config-data\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.353374 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.354680 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.355256 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-scripts\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.359210 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5k4\" (UniqueName: \"kubernetes.io/projected/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-kube-api-access-9d5k4\") pod \"ceilometer-0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.445266 5023 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.476923 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:32 crc kubenswrapper[5023]: I0219 08:30:32.927733 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:33 crc kubenswrapper[5023]: I0219 08:30:33.046782 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerStarted","Data":"98230f8bb4fb416be8e9094165946b4f71011372cfda79967e472ade416ac873"} Feb 19 08:30:33 crc kubenswrapper[5023]: I0219 08:30:33.327179 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:33 crc kubenswrapper[5023]: I0219 08:30:33.506727 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0acf90-9fae-4fe0-a75f-4aa932cd1c97" path="/var/lib/kubelet/pods/dd0acf90-9fae-4fe0-a75f-4aa932cd1c97/volumes" Feb 19 08:30:33 crc kubenswrapper[5023]: I0219 08:30:33.507561 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:34 crc kubenswrapper[5023]: I0219 08:30:34.059558 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerStarted","Data":"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3"} Feb 19 08:30:34 crc kubenswrapper[5023]: I0219 08:30:34.605244 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:35 crc 
kubenswrapper[5023]: I0219 08:30:35.067757 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerStarted","Data":"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964"} Feb 19 08:30:35 crc kubenswrapper[5023]: I0219 08:30:35.067802 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerStarted","Data":"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449"} Feb 19 08:30:35 crc kubenswrapper[5023]: I0219 08:30:35.774774 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:36 crc kubenswrapper[5023]: I0219 08:30:36.939290 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:37 crc kubenswrapper[5023]: I0219 08:30:37.087114 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerStarted","Data":"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d"} Feb 19 08:30:37 crc kubenswrapper[5023]: I0219 08:30:37.087383 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:37 crc kubenswrapper[5023]: I0219 08:30:37.114859 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.785363951 podStartE2EDuration="5.114840385s" podCreationTimestamp="2026-02-19 08:30:32 +0000 UTC" firstStartedPulling="2026-02-19 08:30:32.942094277 +0000 UTC m=+1790.599213235" lastFinishedPulling="2026-02-19 
08:30:36.271570721 +0000 UTC m=+1793.928689669" observedRunningTime="2026-02-19 08:30:37.113361506 +0000 UTC m=+1794.770480464" watchObservedRunningTime="2026-02-19 08:30:37.114840385 +0000 UTC m=+1794.771959333" Feb 19 08:30:37 crc kubenswrapper[5023]: I0219 08:30:37.481596 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:30:37 crc kubenswrapper[5023]: E0219 08:30:37.482493 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:30:37 crc kubenswrapper[5023]: I0219 08:30:37.683552 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.150101 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3e943a47-fba0-42a0-9ef7-f9f677a48428/watcher-decision-engine/0.log" Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.758076 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.911987 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.961836 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data\") pod \"3e943a47-fba0-42a0-9ef7-f9f677a48428\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.961887 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chhbh\" (UniqueName: \"kubernetes.io/projected/3e943a47-fba0-42a0-9ef7-f9f677a48428-kube-api-access-chhbh\") pod \"3e943a47-fba0-42a0-9ef7-f9f677a48428\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.962050 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-custom-prometheus-ca\") pod \"3e943a47-fba0-42a0-9ef7-f9f677a48428\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.962071 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-combined-ca-bundle\") pod \"3e943a47-fba0-42a0-9ef7-f9f677a48428\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.962106 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e943a47-fba0-42a0-9ef7-f9f677a48428-logs\") pod \"3e943a47-fba0-42a0-9ef7-f9f677a48428\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.962150 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-cert-memcached-mtls\") pod \"3e943a47-fba0-42a0-9ef7-f9f677a48428\" (UID: \"3e943a47-fba0-42a0-9ef7-f9f677a48428\") " Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.968971 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e943a47-fba0-42a0-9ef7-f9f677a48428-logs" (OuterVolumeSpecName: "logs") pod "3e943a47-fba0-42a0-9ef7-f9f677a48428" (UID: "3e943a47-fba0-42a0-9ef7-f9f677a48428"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:30:38 crc kubenswrapper[5023]: I0219 08:30:38.988356 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e943a47-fba0-42a0-9ef7-f9f677a48428-kube-api-access-chhbh" (OuterVolumeSpecName: "kube-api-access-chhbh") pod "3e943a47-fba0-42a0-9ef7-f9f677a48428" (UID: "3e943a47-fba0-42a0-9ef7-f9f677a48428"). InnerVolumeSpecName "kube-api-access-chhbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:38.994748 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e943a47-fba0-42a0-9ef7-f9f677a48428" (UID: "3e943a47-fba0-42a0-9ef7-f9f677a48428"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.017717 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data" (OuterVolumeSpecName: "config-data") pod "3e943a47-fba0-42a0-9ef7-f9f677a48428" (UID: "3e943a47-fba0-42a0-9ef7-f9f677a48428"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.049833 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3e943a47-fba0-42a0-9ef7-f9f677a48428" (UID: "3e943a47-fba0-42a0-9ef7-f9f677a48428"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.063921 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.063964 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chhbh\" (UniqueName: \"kubernetes.io/projected/3e943a47-fba0-42a0-9ef7-f9f677a48428-kube-api-access-chhbh\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.063979 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.063993 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.064004 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e943a47-fba0-42a0-9ef7-f9f677a48428-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.071036 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3e943a47-fba0-42a0-9ef7-f9f677a48428" (UID: "3e943a47-fba0-42a0-9ef7-f9f677a48428"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.104628 5023 generic.go:334] "Generic (PLEG): container finished" podID="3e943a47-fba0-42a0-9ef7-f9f677a48428" containerID="dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" exitCode=0 Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.104688 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3e943a47-fba0-42a0-9ef7-f9f677a48428","Type":"ContainerDied","Data":"dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649"} Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.104731 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3e943a47-fba0-42a0-9ef7-f9f677a48428","Type":"ContainerDied","Data":"cd791fdc675dbb146d0a644e10b22dbc280f9de2b255424a2a31781980fd5247"} Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.104726 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.104749 5023 scope.go:117] "RemoveContainer" containerID="dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.123146 5023 scope.go:117] "RemoveContainer" containerID="dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" Feb 19 08:30:39 crc kubenswrapper[5023]: E0219 08:30:39.123685 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649\": container with ID starting with dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649 not found: ID does not exist" containerID="dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.123805 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649"} err="failed to get container status \"dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649\": rpc error: code = NotFound desc = could not find container \"dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649\": container with ID starting with dbfb40d385eec9f54fe91453ca93e6ce0b933559a74e2240d78d805b0bec4649 not found: ID does not exist" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.135795 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.141851 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.166177 5023 reconciler_common.go:293] "Volume detached for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e943a47-fba0-42a0-9ef7-f9f677a48428-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.168059 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:30:39 crc kubenswrapper[5023]: E0219 08:30:39.168466 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e943a47-fba0-42a0-9ef7-f9f677a48428" containerName="watcher-decision-engine" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.168491 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e943a47-fba0-42a0-9ef7-f9f677a48428" containerName="watcher-decision-engine" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.168694 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e943a47-fba0-42a0-9ef7-f9f677a48428" containerName="watcher-decision-engine" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.169678 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.172630 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.181570 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.268015 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.268065 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.268093 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9a749-d85c-4f75-bb88-5e18bedd8b15-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.268137 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-config-data\") 
pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.268305 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.268357 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvd2c\" (UniqueName: \"kubernetes.io/projected/23e9a749-d85c-4f75-bb88-5e18bedd8b15-kube-api-access-wvd2c\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.369874 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.370156 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.370279 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9a749-d85c-4f75-bb88-5e18bedd8b15-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.370419 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.370604 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.370717 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9a749-d85c-4f75-bb88-5e18bedd8b15-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.370883 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvd2c\" (UniqueName: \"kubernetes.io/projected/23e9a749-d85c-4f75-bb88-5e18bedd8b15-kube-api-access-wvd2c\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.373736 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.374000 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.374078 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.375363 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.392268 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvd2c\" (UniqueName: \"kubernetes.io/projected/23e9a749-d85c-4f75-bb88-5e18bedd8b15-kube-api-access-wvd2c\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.485176 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.488675 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e943a47-fba0-42a0-9ef7-f9f677a48428" path="/var/lib/kubelet/pods/3e943a47-fba0-42a0-9ef7-f9f677a48428/volumes" Feb 19 08:30:39 crc kubenswrapper[5023]: I0219 08:30:39.912355 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:30:39 crc kubenswrapper[5023]: W0219 08:30:39.913789 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23e9a749_d85c_4f75_bb88_5e18bedd8b15.slice/crio-7d4911f5cfc982322d0b84b74575e3418abd46bf2fcbb46bac3e8779eb360c41 WatchSource:0}: Error finding container 7d4911f5cfc982322d0b84b74575e3418abd46bf2fcbb46bac3e8779eb360c41: Status 404 returned error can't find the container with id 7d4911f5cfc982322d0b84b74575e3418abd46bf2fcbb46bac3e8779eb360c41 Feb 19 08:30:40 crc kubenswrapper[5023]: I0219 08:30:40.117683 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"23e9a749-d85c-4f75-bb88-5e18bedd8b15","Type":"ContainerStarted","Data":"7d4911f5cfc982322d0b84b74575e3418abd46bf2fcbb46bac3e8779eb360c41"} Feb 19 08:30:41 crc kubenswrapper[5023]: I0219 08:30:41.127969 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"23e9a749-d85c-4f75-bb88-5e18bedd8b15","Type":"ContainerStarted","Data":"b6e658322e38bc7c0c7757b25cb1f6b1b44a3aaa0a0629bd0e4185a7c6603b30"} Feb 19 08:30:41 crc kubenswrapper[5023]: I0219 08:30:41.650579 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:42 crc 
kubenswrapper[5023]: I0219 08:30:42.809110 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:43 crc kubenswrapper[5023]: I0219 08:30:43.999412 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:44 crc kubenswrapper[5023]: W0219 08:30:44.913055 5023 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66f9956_1b42_4f4a_b559_458fca4d2de7.slice/crio-30611173b7d8af57ff0a823b69686d750700816bd1645019f29528886dc0739a": error while statting cgroup v2: [read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66f9956_1b42_4f4a_b559_458fca4d2de7.slice/crio-30611173b7d8af57ff0a823b69686d750700816bd1645019f29528886dc0739a/memory.swap.peak: no such device], continuing to push stats Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.158582 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.171111 5023 generic.go:334] "Generic (PLEG): container finished" podID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerID="7c0dd9551021dd59e5e7b50e22a82d03a8441cf8e7ec0baaae44478cc64ad435" exitCode=137 Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.171177 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"f66f9956-1b42-4f4a-b559-458fca4d2de7","Type":"ContainerDied","Data":"7c0dd9551021dd59e5e7b50e22a82d03a8441cf8e7ec0baaae44478cc64ad435"} Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.258890 5023 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.282843 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=6.282823337 podStartE2EDuration="6.282823337s" podCreationTimestamp="2026-02-19 08:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:30:41.156096613 +0000 UTC m=+1798.813215561" watchObservedRunningTime="2026-02-19 08:30:45.282823337 +0000 UTC m=+1802.939942285" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387137 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data-custom\") pod \"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387285 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpr9m\" (UniqueName: \"kubernetes.io/projected/f66f9956-1b42-4f4a-b559-458fca4d2de7-kube-api-access-jpr9m\") pod \"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387313 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-cert-memcached-mtls\") pod \"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387336 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data\") pod 
\"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387363 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-scripts\") pod \"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387475 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66f9956-1b42-4f4a-b559-458fca4d2de7-etc-machine-id\") pod \"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387518 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66f9956-1b42-4f4a-b559-458fca4d2de7-logs\") pod \"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.387543 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-combined-ca-bundle\") pod \"f66f9956-1b42-4f4a-b559-458fca4d2de7\" (UID: \"f66f9956-1b42-4f4a-b559-458fca4d2de7\") " Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.388224 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f66f9956-1b42-4f4a-b559-458fca4d2de7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.388596 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66f9956-1b42-4f4a-b559-458fca4d2de7-logs" (OuterVolumeSpecName: "logs") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.395374 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.395686 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-scripts" (OuterVolumeSpecName: "scripts") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.401991 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66f9956-1b42-4f4a-b559-458fca4d2de7-kube-api-access-jpr9m" (OuterVolumeSpecName: "kube-api-access-jpr9m") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "kube-api-access-jpr9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.443712 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.457180 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data" (OuterVolumeSpecName: "config-data") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.473428 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f66f9956-1b42-4f4a-b559-458fca4d2de7" (UID: "f66f9956-1b42-4f4a-b559-458fca4d2de7"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496460 5023 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496500 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpr9m\" (UniqueName: \"kubernetes.io/projected/f66f9956-1b42-4f4a-b559-458fca4d2de7-kube-api-access-jpr9m\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496515 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496525 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496533 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496543 5023 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66f9956-1b42-4f4a-b559-458fca4d2de7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496554 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66f9956-1b42-4f4a-b559-458fca4d2de7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: I0219 08:30:45.496564 5023 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66f9956-1b42-4f4a-b559-458fca4d2de7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:45 crc kubenswrapper[5023]: E0219 08:30:45.671768 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66f9956_1b42_4f4a_b559_458fca4d2de7.slice/crio-30611173b7d8af57ff0a823b69686d750700816bd1645019f29528886dc0739a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66f9956_1b42_4f4a_b559_458fca4d2de7.slice\": RecentStats: unable to find data in memory cache]" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.183756 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"f66f9956-1b42-4f4a-b559-458fca4d2de7","Type":"ContainerDied","Data":"30611173b7d8af57ff0a823b69686d750700816bd1645019f29528886dc0739a"} Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.183827 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.184100 5023 scope.go:117] "RemoveContainer" containerID="7c0dd9551021dd59e5e7b50e22a82d03a8441cf8e7ec0baaae44478cc64ad435" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.204766 5023 scope.go:117] "RemoveContainer" containerID="e7dfa4c2164bc4654e99dc79d539630cb497f219929c8508e0f2ba8d4da5bf96" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.208296 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.216601 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.238984 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:46 crc kubenswrapper[5023]: E0219 08:30:46.239439 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api-log" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.239460 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api-log" Feb 19 08:30:46 crc kubenswrapper[5023]: E0219 08:30:46.239480 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.239489 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.239727 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.239748 5023 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" containerName="cinder-api-log" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.240717 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.243252 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-internal-svc" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.243698 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-public-svc" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.243909 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.247112 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.309202 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znv7q\" (UniqueName: \"kubernetes.io/projected/2c60c070-60ac-4bf8-a218-5a68e98284bb-kube-api-access-znv7q\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.309403 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.309526 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.309727 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c60c070-60ac-4bf8-a218-5a68e98284bb-logs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.309847 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-scripts\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.309931 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.310027 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c60c070-60ac-4bf8-a218-5a68e98284bb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.310099 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-public-tls-certs\") pod \"cinder-api-0\" 
(UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.310170 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.310278 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.344958 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411411 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c60c070-60ac-4bf8-a218-5a68e98284bb-logs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411459 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-scripts\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411484 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411523 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c60c070-60ac-4bf8-a218-5a68e98284bb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411542 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411562 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411584 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411628 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c60c070-60ac-4bf8-a218-5a68e98284bb-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411644 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znv7q\" (UniqueName: \"kubernetes.io/projected/2c60c070-60ac-4bf8-a218-5a68e98284bb-kube-api-access-znv7q\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411830 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.411866 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.412058 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c60c070-60ac-4bf8-a218-5a68e98284bb-logs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.416673 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.416736 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.416930 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.417021 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.417639 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.418437 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.418957 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-scripts\") pod \"cinder-api-0\" 
(UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.427708 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znv7q\" (UniqueName: \"kubernetes.io/projected/2c60c070-60ac-4bf8-a218-5a68e98284bb-kube-api-access-znv7q\") pod \"cinder-api-0\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.559230 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:46 crc kubenswrapper[5023]: I0219 08:30:46.871471 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:47 crc kubenswrapper[5023]: I0219 08:30:47.196424 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"2c60c070-60ac-4bf8-a218-5a68e98284bb","Type":"ContainerStarted","Data":"803c2a96a51f6a06c306e471c0c902340b01c069f58e82e8e63da9cca7ef88e4"} Feb 19 08:30:47 crc kubenswrapper[5023]: I0219 08:30:47.491161 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66f9956-1b42-4f4a-b559-458fca4d2de7" path="/var/lib/kubelet/pods/f66f9956-1b42-4f4a-b559-458fca4d2de7/volumes" Feb 19 08:30:47 crc kubenswrapper[5023]: I0219 08:30:47.604505 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:48 crc kubenswrapper[5023]: I0219 08:30:48.209938 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"2c60c070-60ac-4bf8-a218-5a68e98284bb","Type":"ContainerStarted","Data":"110c6d59bd66214ad292711773bc42d993abc5746479a52cc81aac188ea0d12a"} Feb 19 08:30:48 crc kubenswrapper[5023]: I0219 08:30:48.210263 5023 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"2c60c070-60ac-4bf8-a218-5a68e98284bb","Type":"ContainerStarted","Data":"1e5430ca6de0cbff93c710fdfca65352965bc823437e0b84b9d559fee3eb129a"} Feb 19 08:30:48 crc kubenswrapper[5023]: I0219 08:30:48.210445 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:48 crc kubenswrapper[5023]: I0219 08:30:48.232926 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=2.232902953 podStartE2EDuration="2.232902953s" podCreationTimestamp="2026-02-19 08:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:30:48.228768533 +0000 UTC m=+1805.885887481" watchObservedRunningTime="2026-02-19 08:30:48.232902953 +0000 UTC m=+1805.890021901" Feb 19 08:30:48 crc kubenswrapper[5023]: I0219 08:30:48.815111 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:49 crc kubenswrapper[5023]: I0219 08:30:49.477301 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:30:49 crc kubenswrapper[5023]: E0219 08:30:49.477922 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:30:49 crc kubenswrapper[5023]: I0219 08:30:49.488001 5023 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:49 crc kubenswrapper[5023]: I0219 08:30:49.508916 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:50 crc kubenswrapper[5023]: I0219 08:30:50.226124 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:50 crc kubenswrapper[5023]: I0219 08:30:50.255121 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:30:50 crc kubenswrapper[5023]: I0219 08:30:50.418496 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:51 crc kubenswrapper[5023]: I0219 08:30:51.673570 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:51 crc kubenswrapper[5023]: I0219 08:30:51.951863 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.035845 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-7mxjf"] Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.054208 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-7mxjf"] Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.072740 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.073100 
5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="cinder-backup" containerID="cri-o://eb5b94302180aa38d0b09bc233e682ed8f8ce4054c320ba0cf321af329f5cd1f" gracePeriod=30 Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.073152 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="probe" containerID="cri-o://6f689d956cc4b5d702fde8897d651b4c4937657ecfa109bb0284883de8e0287c" gracePeriod=30 Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.111605 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.111893 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerName="cinder-scheduler" containerID="cri-o://3e6e6daaace73bee93c2c66b23bca69f4fa927d4910a4b075718a81766441064" gracePeriod=30 Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.112285 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerName="probe" containerID="cri-o://27733719e1d3cadb779ba93ace5f7511df0402a64de51d4138273b80d5832b4e" gracePeriod=30 Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.150552 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder7861-account-delete-rhpv6"] Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.152023 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.163661 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder7861-account-delete-rhpv6"] Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.171649 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.171891 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api-log" containerID="cri-o://1e5430ca6de0cbff93c710fdfca65352965bc823437e0b84b9d559fee3eb129a" gracePeriod=30 Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.172285 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api" containerID="cri-o://110c6d59bd66214ad292711773bc42d993abc5746479a52cc81aac188ea0d12a" gracePeriod=30 Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.217733 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7gr8\" (UniqueName: \"kubernetes.io/projected/ad260530-13f2-43b3-a74f-8f175d553eba-kube-api-access-g7gr8\") pod \"cinder7861-account-delete-rhpv6\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.217796 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad260530-13f2-43b3-a74f-8f175d553eba-operator-scripts\") pod \"cinder7861-account-delete-rhpv6\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " 
pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.319427 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7gr8\" (UniqueName: \"kubernetes.io/projected/ad260530-13f2-43b3-a74f-8f175d553eba-kube-api-access-g7gr8\") pod \"cinder7861-account-delete-rhpv6\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.319482 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad260530-13f2-43b3-a74f-8f175d553eba-operator-scripts\") pod \"cinder7861-account-delete-rhpv6\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.320225 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad260530-13f2-43b3-a74f-8f175d553eba-operator-scripts\") pod \"cinder7861-account-delete-rhpv6\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.345287 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7gr8\" (UniqueName: \"kubernetes.io/projected/ad260530-13f2-43b3-a74f-8f175d553eba-kube-api-access-g7gr8\") pod \"cinder7861-account-delete-rhpv6\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:52 crc kubenswrapper[5023]: I0219 08:30:52.472009 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.142805 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.265238 5023 generic.go:334] "Generic (PLEG): container finished" podID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerID="27733719e1d3cadb779ba93ace5f7511df0402a64de51d4138273b80d5832b4e" exitCode=0 Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.265310 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"3ef60639-6272-46c7-8fde-15ce9d7e7ded","Type":"ContainerDied","Data":"27733719e1d3cadb779ba93ace5f7511df0402a64de51d4138273b80d5832b4e"} Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.267339 5023 generic.go:334] "Generic (PLEG): container finished" podID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerID="6f689d956cc4b5d702fde8897d651b4c4937657ecfa109bb0284883de8e0287c" exitCode=0 Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.267384 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"7c0f1466-ab85-4e39-a327-45d9e00e8e8e","Type":"ContainerDied","Data":"6f689d956cc4b5d702fde8897d651b4c4937657ecfa109bb0284883de8e0287c"} Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.269346 5023 generic.go:334] "Generic (PLEG): container finished" podID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerID="110c6d59bd66214ad292711773bc42d993abc5746479a52cc81aac188ea0d12a" exitCode=0 Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.269369 5023 generic.go:334] "Generic (PLEG): container finished" podID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerID="1e5430ca6de0cbff93c710fdfca65352965bc823437e0b84b9d559fee3eb129a" exitCode=143 Feb 19 
08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.269388 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"2c60c070-60ac-4bf8-a218-5a68e98284bb","Type":"ContainerDied","Data":"110c6d59bd66214ad292711773bc42d993abc5746479a52cc81aac188ea0d12a"} Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.269405 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"2c60c070-60ac-4bf8-a218-5a68e98284bb","Type":"ContainerDied","Data":"1e5430ca6de0cbff93c710fdfca65352965bc823437e0b84b9d559fee3eb129a"} Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.490759 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f0281e-378e-4f3d-bfa4-3d8ac1ec026e" path="/var/lib/kubelet/pods/47f0281e-378e-4f3d-bfa4-3d8ac1ec026e/volumes" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.655740 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.660409 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder7861-account-delete-rhpv6"] Feb 19 08:30:53 crc kubenswrapper[5023]: W0219 08:30:53.668444 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad260530_13f2_43b3_a74f_8f175d553eba.slice/crio-0ef7fec918266c25344f5e1d7e4343868fe62d609475b95c794ceaa0c6a665a1 WatchSource:0}: Error finding container 0ef7fec918266c25344f5e1d7e4343868fe62d609475b95c794ceaa0c6a665a1: Status 404 returned error can't find the container with id 0ef7fec918266c25344f5e1d7e4343868fe62d609475b95c794ceaa0c6a665a1 Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753522 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-scripts\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753572 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znv7q\" (UniqueName: \"kubernetes.io/projected/2c60c070-60ac-4bf8-a218-5a68e98284bb-kube-api-access-znv7q\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753600 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-combined-ca-bundle\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753639 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data-custom\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753678 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-cert-memcached-mtls\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753764 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c60c070-60ac-4bf8-a218-5a68e98284bb-logs\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 
08:30:53.753792 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-public-tls-certs\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753835 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c60c070-60ac-4bf8-a218-5a68e98284bb-etc-machine-id\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753894 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-internal-tls-certs\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.753937 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data\") pod \"2c60c070-60ac-4bf8-a218-5a68e98284bb\" (UID: \"2c60c070-60ac-4bf8-a218-5a68e98284bb\") " Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.756243 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c60c070-60ac-4bf8-a218-5a68e98284bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.756550 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c60c070-60ac-4bf8-a218-5a68e98284bb-logs" (OuterVolumeSpecName: "logs") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.761064 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.761092 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-scripts" (OuterVolumeSpecName: "scripts") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.761121 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c60c070-60ac-4bf8-a218-5a68e98284bb-kube-api-access-znv7q" (OuterVolumeSpecName: "kube-api-access-znv7q") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "kube-api-access-znv7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.778278 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.802345 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.804819 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data" (OuterVolumeSpecName: "config-data") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.806332 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.842590 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2c60c070-60ac-4bf8-a218-5a68e98284bb" (UID: "2c60c070-60ac-4bf8-a218-5a68e98284bb"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.861898 5023 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c60c070-60ac-4bf8-a218-5a68e98284bb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.861942 5023 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.861956 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.861968 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.861980 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znv7q\" (UniqueName: \"kubernetes.io/projected/2c60c070-60ac-4bf8-a218-5a68e98284bb-kube-api-access-znv7q\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.861993 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.862004 5023 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.862015 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.862026 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c60c070-60ac-4bf8-a218-5a68e98284bb-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.862035 5023 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c60c070-60ac-4bf8-a218-5a68e98284bb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.916556 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:30:53 crc kubenswrapper[5023]: I0219 08:30:53.916833 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="23e9a749-d85c-4f75-bb88-5e18bedd8b15" containerName="watcher-decision-engine" containerID="cri-o://b6e658322e38bc7c0c7757b25cb1f6b1b44a3aaa0a0629bd0e4185a7c6603b30" gracePeriod=30 Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.281092 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" 
event={"ID":"2c60c070-60ac-4bf8-a218-5a68e98284bb","Type":"ContainerDied","Data":"803c2a96a51f6a06c306e471c0c902340b01c069f58e82e8e63da9cca7ef88e4"} Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.281145 5023 scope.go:117] "RemoveContainer" containerID="110c6d59bd66214ad292711773bc42d993abc5746479a52cc81aac188ea0d12a" Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.281261 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.286844 5023 generic.go:334] "Generic (PLEG): container finished" podID="ad260530-13f2-43b3-a74f-8f175d553eba" containerID="f0db34e6d4a695f8c4a79f9cf7291ab9b5099efe62c91c5419e50811df6e4cff" exitCode=0 Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.286898 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" event={"ID":"ad260530-13f2-43b3-a74f-8f175d553eba","Type":"ContainerDied","Data":"f0db34e6d4a695f8c4a79f9cf7291ab9b5099efe62c91c5419e50811df6e4cff"} Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.286933 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" event={"ID":"ad260530-13f2-43b3-a74f-8f175d553eba","Type":"ContainerStarted","Data":"0ef7fec918266c25344f5e1d7e4343868fe62d609475b95c794ceaa0c6a665a1"} Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.307438 5023 scope.go:117] "RemoveContainer" containerID="1e5430ca6de0cbff93c710fdfca65352965bc823437e0b84b9d559fee3eb129a" Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.324705 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.331036 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.377872 5023 
log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.551249 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.551713 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="ceilometer-central-agent" containerID="cri-o://db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" gracePeriod=30 Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.551850 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="proxy-httpd" containerID="cri-o://ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" gracePeriod=30 Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.551895 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="sg-core" containerID="cri-o://02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" gracePeriod=30 Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.551934 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="ceilometer-notification-agent" containerID="cri-o://9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" gracePeriod=30 Feb 19 08:30:54 crc kubenswrapper[5023]: I0219 08:30:54.564482 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" 
containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.237:3000/\": EOF" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.284410 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298174 5023 generic.go:334] "Generic (PLEG): container finished" podID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerID="ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" exitCode=0 Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298214 5023 generic.go:334] "Generic (PLEG): container finished" podID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerID="02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" exitCode=2 Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298224 5023 generic.go:334] "Generic (PLEG): container finished" podID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerID="9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" exitCode=0 Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298236 5023 generic.go:334] "Generic (PLEG): container finished" podID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerID="db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" exitCode=0 Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298281 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerDied","Data":"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d"} Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298314 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerDied","Data":"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964"} Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298330 5023 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerDied","Data":"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449"} Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298342 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerDied","Data":"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3"} Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298354 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"55b35c67-6038-4f17-ba29-b72d6b4e5ee0","Type":"ContainerDied","Data":"98230f8bb4fb416be8e9094165946b4f71011372cfda79967e472ade416ac873"} Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298374 5023 scope.go:117] "RemoveContainer" containerID="ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.298514 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.337994 5023 scope.go:117] "RemoveContainer" containerID="02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382515 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-ceilometer-tls-certs\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382682 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-run-httpd\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382717 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-combined-ca-bundle\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382754 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-log-httpd\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382839 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-config-data\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " 
Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382879 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d5k4\" (UniqueName: \"kubernetes.io/projected/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-kube-api-access-9d5k4\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382909 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-scripts\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.382970 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-sg-core-conf-yaml\") pod \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\" (UID: \"55b35c67-6038-4f17-ba29-b72d6b4e5ee0\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.384380 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.384673 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.390737 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-kube-api-access-9d5k4" (OuterVolumeSpecName: "kube-api-access-9d5k4") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "kube-api-access-9d5k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.395975 5023 scope.go:117] "RemoveContainer" containerID="9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.410411 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.412534 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-scripts" (OuterVolumeSpecName: "scripts") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.452225 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.477002 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.484671 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.484707 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.484721 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.484734 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.484745 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d5k4\" (UniqueName: \"kubernetes.io/projected/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-kube-api-access-9d5k4\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.484757 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.484769 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.516934 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" path="/var/lib/kubelet/pods/2c60c070-60ac-4bf8-a218-5a68e98284bb/volumes" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.534041 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-config-data" (OuterVolumeSpecName: "config-data") pod "55b35c67-6038-4f17-ba29-b72d6b4e5ee0" (UID: "55b35c67-6038-4f17-ba29-b72d6b4e5ee0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.538459 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.557973 5023 scope.go:117] "RemoveContainer" containerID="db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.575524 5023 scope.go:117] "RemoveContainer" containerID="ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.575929 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": container with ID starting with 
ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d not found: ID does not exist" containerID="ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.575961 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d"} err="failed to get container status \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": rpc error: code = NotFound desc = could not find container \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": container with ID starting with ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.575985 5023 scope.go:117] "RemoveContainer" containerID="02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.576301 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": container with ID starting with 02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964 not found: ID does not exist" containerID="02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.576322 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964"} err="failed to get container status \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": rpc error: code = NotFound desc = could not find container \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": container with ID starting with 02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964 not found: ID does not 
exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.576375 5023 scope.go:117] "RemoveContainer" containerID="9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.576575 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": container with ID starting with 9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449 not found: ID does not exist" containerID="9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.576603 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449"} err="failed to get container status \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": rpc error: code = NotFound desc = could not find container \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": container with ID starting with 9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.576632 5023 scope.go:117] "RemoveContainer" containerID="db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.576841 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": container with ID starting with db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3 not found: ID does not exist" containerID="db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.576855 5023 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3"} err="failed to get container status \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": rpc error: code = NotFound desc = could not find container \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": container with ID starting with db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.576867 5023 scope.go:117] "RemoveContainer" containerID="ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.577167 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d"} err="failed to get container status \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": rpc error: code = NotFound desc = could not find container \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": container with ID starting with ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.577262 5023 scope.go:117] "RemoveContainer" containerID="02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.577869 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964"} err="failed to get container status \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": rpc error: code = NotFound desc = could not find container \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": container with ID starting with 
02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.577923 5023 scope.go:117] "RemoveContainer" containerID="9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.578211 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449"} err="failed to get container status \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": rpc error: code = NotFound desc = could not find container \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": container with ID starting with 9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.578231 5023 scope.go:117] "RemoveContainer" containerID="db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.578498 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3"} err="failed to get container status \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": rpc error: code = NotFound desc = could not find container \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": container with ID starting with db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.578521 5023 scope.go:117] "RemoveContainer" containerID="ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.578795 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d"} err="failed to get container status \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": rpc error: code = NotFound desc = could not find container \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": container with ID starting with ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.578813 5023 scope.go:117] "RemoveContainer" containerID="02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.579099 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964"} err="failed to get container status \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": rpc error: code = NotFound desc = could not find container \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": container with ID starting with 02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.579117 5023 scope.go:117] "RemoveContainer" containerID="9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.579433 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449"} err="failed to get container status \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": rpc error: code = NotFound desc = could not find container \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": container with ID starting with 9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449 not found: ID does not 
exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.579450 5023 scope.go:117] "RemoveContainer" containerID="db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.579703 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3"} err="failed to get container status \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": rpc error: code = NotFound desc = could not find container \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": container with ID starting with db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.579718 5023 scope.go:117] "RemoveContainer" containerID="ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.580129 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d"} err="failed to get container status \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": rpc error: code = NotFound desc = could not find container \"ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d\": container with ID starting with ef61e19c887c91ebf2dea4f71bdb8848569528851678e05d9e21e30c28595e7d not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.580230 5023 scope.go:117] "RemoveContainer" containerID="02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.580564 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964"} err="failed to get container status 
\"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": rpc error: code = NotFound desc = could not find container \"02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964\": container with ID starting with 02625d53bd7baf1d545c19742092769dbb93f10b46543bd97e1219829a9b0964 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.580665 5023 scope.go:117] "RemoveContainer" containerID="9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.581086 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449"} err="failed to get container status \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": rpc error: code = NotFound desc = could not find container \"9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449\": container with ID starting with 9bf0bf21612f3bd219bf0a84625cfb169938d648eec45dacd83874a3db7ce449 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.581171 5023 scope.go:117] "RemoveContainer" containerID="db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.581580 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3"} err="failed to get container status \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": rpc error: code = NotFound desc = could not find container \"db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3\": container with ID starting with db0db909edad75f1e9d6fec8dacf702bac6c5c1727cef0ce6e13968847f7a7e3 not found: ID does not exist" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.588642 5023 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/55b35c67-6038-4f17-ba29-b72d6b4e5ee0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.615025 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.667940 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.675299 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.683744 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.685291 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="proxy-httpd" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.685406 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="proxy-httpd" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.685562 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.685662 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.685748 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="ceilometer-notification-agent" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.685822 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" 
containerName="ceilometer-notification-agent" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.685897 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="sg-core" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.685970 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="sg-core" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.686057 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api-log" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.686135 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api-log" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.686215 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="ceilometer-central-agent" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.686287 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="ceilometer-central-agent" Feb 19 08:30:55 crc kubenswrapper[5023]: E0219 08:30:55.686356 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad260530-13f2-43b3-a74f-8f175d553eba" containerName="mariadb-account-delete" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.686416 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad260530-13f2-43b3-a74f-8f175d553eba" containerName="mariadb-account-delete" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.686699 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="sg-core" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.686811 5023 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ad260530-13f2-43b3-a74f-8f175d553eba" containerName="mariadb-account-delete" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.686898 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="ceilometer-notification-agent" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.686978 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="proxy-httpd" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.687051 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api-log" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.687100 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c60c070-60ac-4bf8-a218-5a68e98284bb" containerName="cinder-api" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.687163 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" containerName="ceilometer-central-agent" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.689458 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.692099 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.695916 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.696209 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.704004 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.792111 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7gr8\" (UniqueName: \"kubernetes.io/projected/ad260530-13f2-43b3-a74f-8f175d553eba-kube-api-access-g7gr8\") pod \"ad260530-13f2-43b3-a74f-8f175d553eba\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.792380 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad260530-13f2-43b3-a74f-8f175d553eba-operator-scripts\") pod \"ad260530-13f2-43b3-a74f-8f175d553eba\" (UID: \"ad260530-13f2-43b3-a74f-8f175d553eba\") " Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.792980 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793033 5023 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad260530-13f2-43b3-a74f-8f175d553eba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad260530-13f2-43b3-a74f-8f175d553eba" (UID: "ad260530-13f2-43b3-a74f-8f175d553eba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793075 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-log-httpd\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793149 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-scripts\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793250 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-run-httpd\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793386 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-config-data\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793465 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdwq\" (UniqueName: \"kubernetes.io/projected/27778914-7f8d-4c26-9e77-a93ccd6b04e1-kube-api-access-zzdwq\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793503 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793562 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.793880 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad260530-13f2-43b3-a74f-8f175d553eba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.795625 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad260530-13f2-43b3-a74f-8f175d553eba-kube-api-access-g7gr8" (OuterVolumeSpecName: "kube-api-access-g7gr8") pod "ad260530-13f2-43b3-a74f-8f175d553eba" (UID: "ad260530-13f2-43b3-a74f-8f175d553eba"). InnerVolumeSpecName "kube-api-access-g7gr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.894955 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-scripts\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895019 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-run-httpd\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895083 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-config-data\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895127 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzdwq\" (UniqueName: \"kubernetes.io/projected/27778914-7f8d-4c26-9e77-a93ccd6b04e1-kube-api-access-zzdwq\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895155 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895180 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895217 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895253 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-log-httpd\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895306 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7gr8\" (UniqueName: \"kubernetes.io/projected/ad260530-13f2-43b3-a74f-8f175d553eba-kube-api-access-g7gr8\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.895808 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-log-httpd\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.896344 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-run-httpd\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" 
Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.898690 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.898810 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-scripts\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.898876 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.900177 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.900212 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-config-data\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:55 crc kubenswrapper[5023]: I0219 08:30:55.920609 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzdwq\" (UniqueName: 
\"kubernetes.io/projected/27778914-7f8d-4c26-9e77-a93ccd6b04e1-kube-api-access-zzdwq\") pod \"ceilometer-0\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:56 crc kubenswrapper[5023]: I0219 08:30:56.076422 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:30:56 crc kubenswrapper[5023]: I0219 08:30:56.316867 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" event={"ID":"ad260530-13f2-43b3-a74f-8f175d553eba","Type":"ContainerDied","Data":"0ef7fec918266c25344f5e1d7e4343868fe62d609475b95c794ceaa0c6a665a1"} Feb 19 08:30:56 crc kubenswrapper[5023]: I0219 08:30:56.316911 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef7fec918266c25344f5e1d7e4343868fe62d609475b95c794ceaa0c6a665a1" Feb 19 08:30:56 crc kubenswrapper[5023]: I0219 08:30:56.316978 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder7861-account-delete-rhpv6" Feb 19 08:30:56 crc kubenswrapper[5023]: I0219 08:30:56.622341 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:30:56 crc kubenswrapper[5023]: I0219 08:30:56.773209 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.146127 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-create-4hlnq"] Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.153542 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-create-4hlnq"] Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.175242 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder7861-account-delete-rhpv6"] Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.183809 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder7861-account-delete-rhpv6"] Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.203772 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-7861-account-create-update-mkrjf"] Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.210421 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-7861-account-create-update-mkrjf"] Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.387041 5023 generic.go:334] "Generic (PLEG): container finished" podID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerID="eb5b94302180aa38d0b09bc233e682ed8f8ce4054c320ba0cf321af329f5cd1f" exitCode=0 Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.387121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" 
event={"ID":"7c0f1466-ab85-4e39-a327-45d9e00e8e8e","Type":"ContainerDied","Data":"eb5b94302180aa38d0b09bc233e682ed8f8ce4054c320ba0cf321af329f5cd1f"} Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.388883 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerStarted","Data":"52a3e9a5c025ab504d072a7c0e1e6c2d847c91e28309cd9b473f0db3f660ec1b"} Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.388926 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerStarted","Data":"a3ad05b2b2fe6c044a3bf3aa029ed7c2717acc76421f7e1c830bc5a9773237aa"} Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.390673 5023 generic.go:334] "Generic (PLEG): container finished" podID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerID="3e6e6daaace73bee93c2c66b23bca69f4fa927d4910a4b075718a81766441064" exitCode=0 Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.390702 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"3ef60639-6272-46c7-8fde-15ce9d7e7ded","Type":"ContainerDied","Data":"3e6e6daaace73bee93c2c66b23bca69f4fa927d4910a4b075718a81766441064"} Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.421788 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.513834 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b35c67-6038-4f17-ba29-b72d6b4e5ee0" path="/var/lib/kubelet/pods/55b35c67-6038-4f17-ba29-b72d6b4e5ee0/volumes" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.514969 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c8e32c-e771-4c02-bb99-51acdc7a231f" path="/var/lib/kubelet/pods/73c8e32c-e771-4c02-bb99-51acdc7a231f/volumes" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.516252 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925dbd8c-6e2e-40fd-84d3-e61de27c7ad9" path="/var/lib/kubelet/pods/925dbd8c-6e2e-40fd-84d3-e61de27c7ad9/volumes" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.519344 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad260530-13f2-43b3-a74f-8f175d553eba" path="/var/lib/kubelet/pods/ad260530-13f2-43b3-a74f-8f175d553eba/volumes" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.538838 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-cert-memcached-mtls\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.538906 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-machine-id\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.538934 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-lib-cinder\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.538991 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-run\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539036 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539064 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-sys\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539096 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-dev\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539128 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-nvme\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539160 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-cinder\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539201 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krvfq\" (UniqueName: \"kubernetes.io/projected/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-kube-api-access-krvfq\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539271 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-combined-ca-bundle\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539359 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-scripts\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539383 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-lib-modules\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539453 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-iscsi\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc 
kubenswrapper[5023]: I0219 08:30:57.539528 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-brick\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.539573 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data-custom\") pod \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\" (UID: \"7c0f1466-ab85-4e39-a327-45d9e00e8e8e\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.541630 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.542036 5023 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-nvme\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.542095 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-run" (OuterVolumeSpecName: "run") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.542125 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.542145 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.547668 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.547717 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.547743 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.547766 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.547793 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-sys" (OuterVolumeSpecName: "sys") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.549227 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.549308 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-dev" (OuterVolumeSpecName: "dev") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.549572 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-scripts" (OuterVolumeSpecName: "scripts") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.551802 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-kube-api-access-krvfq" (OuterVolumeSpecName: "kube-api-access-krvfq") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "kube-api-access-krvfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.597403 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644569 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krvfq\" (UniqueName: \"kubernetes.io/projected/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-kube-api-access-krvfq\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644600 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644609 5023 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-lib-modules\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644638 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644650 5023 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-iscsi\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644658 5023 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-brick\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644667 5023 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644676 5023 
reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644684 5023 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644694 5023 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-run\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644701 5023 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-sys\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644709 5023 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-dev\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.644721 5023 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.648742 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data" (OuterVolumeSpecName: "config-data") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.692271 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7c0f1466-ab85-4e39-a327-45d9e00e8e8e" (UID: "7c0f1466-ab85-4e39-a327-45d9e00e8e8e"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.710127 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.749731 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.749769 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0f1466-ab85-4e39-a327-45d9e00e8e8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.851570 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data\") pod \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.851646 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef60639-6272-46c7-8fde-15ce9d7e7ded-etc-machine-id\") pod \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.851772 5023 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-scripts\") pod \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.851852 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-combined-ca-bundle\") pod \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.851944 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-cert-memcached-mtls\") pod \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.851987 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data-custom\") pod \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.852019 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82g9v\" (UniqueName: \"kubernetes.io/projected/3ef60639-6272-46c7-8fde-15ce9d7e7ded-kube-api-access-82g9v\") pod \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\" (UID: \"3ef60639-6272-46c7-8fde-15ce9d7e7ded\") " Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.855601 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ef60639-6272-46c7-8fde-15ce9d7e7ded-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"3ef60639-6272-46c7-8fde-15ce9d7e7ded" (UID: "3ef60639-6272-46c7-8fde-15ce9d7e7ded"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.857568 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-scripts" (OuterVolumeSpecName: "scripts") pod "3ef60639-6272-46c7-8fde-15ce9d7e7ded" (UID: "3ef60639-6272-46c7-8fde-15ce9d7e7ded"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.859447 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3ef60639-6272-46c7-8fde-15ce9d7e7ded" (UID: "3ef60639-6272-46c7-8fde-15ce9d7e7ded"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.867759 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef60639-6272-46c7-8fde-15ce9d7e7ded-kube-api-access-82g9v" (OuterVolumeSpecName: "kube-api-access-82g9v") pod "3ef60639-6272-46c7-8fde-15ce9d7e7ded" (UID: "3ef60639-6272-46c7-8fde-15ce9d7e7ded"). InnerVolumeSpecName "kube-api-access-82g9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.923391 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ef60639-6272-46c7-8fde-15ce9d7e7ded" (UID: "3ef60639-6272-46c7-8fde-15ce9d7e7ded"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.947307 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.954788 5023 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.954823 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82g9v\" (UniqueName: \"kubernetes.io/projected/3ef60639-6272-46c7-8fde-15ce9d7e7ded-kube-api-access-82g9v\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.954835 5023 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef60639-6272-46c7-8fde-15ce9d7e7ded-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.954845 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.954854 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:57 crc kubenswrapper[5023]: I0219 08:30:57.965323 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data" (OuterVolumeSpecName: "config-data") pod "3ef60639-6272-46c7-8fde-15ce9d7e7ded" (UID: "3ef60639-6272-46c7-8fde-15ce9d7e7ded"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.015560 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3ef60639-6272-46c7-8fde-15ce9d7e7ded" (UID: "3ef60639-6272-46c7-8fde-15ce9d7e7ded"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.056606 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.056678 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef60639-6272-46c7-8fde-15ce9d7e7ded-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.405507 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerStarted","Data":"4b22dea68bda971bc10bc2976b496ca78b551e07e4d0f9411683c959b86f1d9b"} Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.407962 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"3ef60639-6272-46c7-8fde-15ce9d7e7ded","Type":"ContainerDied","Data":"6031188faee7307855a7c8800499901154b8569ee4b782afe0dffa3153437d31"} Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.408043 5023 scope.go:117] "RemoveContainer" containerID="27733719e1d3cadb779ba93ace5f7511df0402a64de51d4138273b80d5832b4e" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.408187 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.415525 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"7c0f1466-ab85-4e39-a327-45d9e00e8e8e","Type":"ContainerDied","Data":"9fa344343f55882f98850fa8b20e26b9e80d07358aecc96ecdf268ab7b86771c"} Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.415611 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.469500 5023 scope.go:117] "RemoveContainer" containerID="3e6e6daaace73bee93c2c66b23bca69f4fa927d4910a4b075718a81766441064" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.505804 5023 scope.go:117] "RemoveContainer" containerID="6f689d956cc4b5d702fde8897d651b4c4937657ecfa109bb0284883de8e0287c" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.508636 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.530042 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.532408 5023 scope.go:117] "RemoveContainer" containerID="eb5b94302180aa38d0b09bc233e682ed8f8ce4054c320ba0cf321af329f5cd1f" Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.540429 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:58 crc kubenswrapper[5023]: I0219 08:30:58.548248 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 19 08:30:59 crc kubenswrapper[5023]: I0219 08:30:59.128941 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:30:59 crc kubenswrapper[5023]: I0219 08:30:59.433049 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerStarted","Data":"5318d543a1d4c77970c185ce94c4468b7a85ced1281bb2d867db8e7a2fb41c16"} Feb 19 08:30:59 crc kubenswrapper[5023]: I0219 08:30:59.489684 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" path="/var/lib/kubelet/pods/3ef60639-6272-46c7-8fde-15ce9d7e7ded/volumes" Feb 19 08:30:59 crc kubenswrapper[5023]: I0219 08:30:59.490351 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" path="/var/lib/kubelet/pods/7c0f1466-ab85-4e39-a327-45d9e00e8e8e/volumes" Feb 19 08:31:00 crc kubenswrapper[5023]: I0219 08:31:00.314811 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:31:00 crc kubenswrapper[5023]: I0219 08:31:00.442034 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerStarted","Data":"1c3cd2bae039e498cd95c70570e1d051f37b9fea80eae30052e6b45a2872ed86"} Feb 19 08:31:00 crc kubenswrapper[5023]: I0219 08:31:00.443228 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:00 crc kubenswrapper[5023]: I0219 08:31:00.468192 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.110904923 podStartE2EDuration="5.468168896s" podCreationTimestamp="2026-02-19 08:30:55 +0000 UTC" 
firstStartedPulling="2026-02-19 08:30:56.633180028 +0000 UTC m=+1814.290298976" lastFinishedPulling="2026-02-19 08:30:59.990443981 +0000 UTC m=+1817.647562949" observedRunningTime="2026-02-19 08:31:00.46078408 +0000 UTC m=+1818.117903048" watchObservedRunningTime="2026-02-19 08:31:00.468168896 +0000 UTC m=+1818.125287844" Feb 19 08:31:01 crc kubenswrapper[5023]: I0219 08:31:01.480492 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.464432 5023 generic.go:334] "Generic (PLEG): container finished" podID="23e9a749-d85c-4f75-bb88-5e18bedd8b15" containerID="b6e658322e38bc7c0c7757b25cb1f6b1b44a3aaa0a0629bd0e4185a7c6603b30" exitCode=0 Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.465595 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"23e9a749-d85c-4f75-bb88-5e18bedd8b15","Type":"ContainerDied","Data":"b6e658322e38bc7c0c7757b25cb1f6b1b44a3aaa0a0629bd0e4185a7c6603b30"} Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.465667 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"23e9a749-d85c-4f75-bb88-5e18bedd8b15","Type":"ContainerDied","Data":"7d4911f5cfc982322d0b84b74575e3418abd46bf2fcbb46bac3e8779eb360c41"} Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.465680 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d4911f5cfc982322d0b84b74575e3418abd46bf2fcbb46bac3e8779eb360c41" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.477405 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:31:02 crc kubenswrapper[5023]: E0219 08:31:02.477882 5023 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.517099 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.658263 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-cert-memcached-mtls\") pod \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.658325 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvd2c\" (UniqueName: \"kubernetes.io/projected/23e9a749-d85c-4f75-bb88-5e18bedd8b15-kube-api-access-wvd2c\") pod \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.658377 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-combined-ca-bundle\") pod \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.658435 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-config-data\") pod \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\" (UID: 
\"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.658496 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-custom-prometheus-ca\") pod \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.658524 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9a749-d85c-4f75-bb88-5e18bedd8b15-logs\") pod \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\" (UID: \"23e9a749-d85c-4f75-bb88-5e18bedd8b15\") " Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.659209 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e9a749-d85c-4f75-bb88-5e18bedd8b15-logs" (OuterVolumeSpecName: "logs") pod "23e9a749-d85c-4f75-bb88-5e18bedd8b15" (UID: "23e9a749-d85c-4f75-bb88-5e18bedd8b15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.669843 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e9a749-d85c-4f75-bb88-5e18bedd8b15-kube-api-access-wvd2c" (OuterVolumeSpecName: "kube-api-access-wvd2c") pod "23e9a749-d85c-4f75-bb88-5e18bedd8b15" (UID: "23e9a749-d85c-4f75-bb88-5e18bedd8b15"). InnerVolumeSpecName "kube-api-access-wvd2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.690304 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23e9a749-d85c-4f75-bb88-5e18bedd8b15" (UID: "23e9a749-d85c-4f75-bb88-5e18bedd8b15"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.701656 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "23e9a749-d85c-4f75-bb88-5e18bedd8b15" (UID: "23e9a749-d85c-4f75-bb88-5e18bedd8b15"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.719212 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_23e9a749-d85c-4f75-bb88-5e18bedd8b15/watcher-decision-engine/0.log" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.729859 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-config-data" (OuterVolumeSpecName: "config-data") pod "23e9a749-d85c-4f75-bb88-5e18bedd8b15" (UID: "23e9a749-d85c-4f75-bb88-5e18bedd8b15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.744227 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "23e9a749-d85c-4f75-bb88-5e18bedd8b15" (UID: "23e9a749-d85c-4f75-bb88-5e18bedd8b15"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.762022 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.762056 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvd2c\" (UniqueName: \"kubernetes.io/projected/23e9a749-d85c-4f75-bb88-5e18bedd8b15-kube-api-access-wvd2c\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.762067 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.762076 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.762085 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23e9a749-d85c-4f75-bb88-5e18bedd8b15-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:02 crc kubenswrapper[5023]: I0219 08:31:02.762096 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9a749-d85c-4f75-bb88-5e18bedd8b15-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.474263 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.515139 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.523901 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.542937 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:03 crc kubenswrapper[5023]: E0219 08:31:03.543408 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerName="cinder-scheduler" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543431 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerName="cinder-scheduler" Feb 19 08:31:03 crc kubenswrapper[5023]: E0219 08:31:03.543447 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e9a749-d85c-4f75-bb88-5e18bedd8b15" containerName="watcher-decision-engine" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543457 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e9a749-d85c-4f75-bb88-5e18bedd8b15" containerName="watcher-decision-engine" Feb 19 08:31:03 crc kubenswrapper[5023]: E0219 08:31:03.543475 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="probe" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543483 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="probe" Feb 19 08:31:03 crc kubenswrapper[5023]: E0219 08:31:03.543501 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" 
containerName="probe" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543509 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerName="probe" Feb 19 08:31:03 crc kubenswrapper[5023]: E0219 08:31:03.543522 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="cinder-backup" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543531 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="cinder-backup" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543744 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="probe" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543761 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerName="cinder-scheduler" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543774 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef60639-6272-46c7-8fde-15ce9d7e7ded" containerName="probe" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543793 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e9a749-d85c-4f75-bb88-5e18bedd8b15" containerName="watcher-decision-engine" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.543807 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0f1466-ab85-4e39-a327-45d9e00e8e8e" containerName="cinder-backup" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.544532 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.547337 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.551799 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.673960 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.674045 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rl62\" (UniqueName: \"kubernetes.io/projected/30d03346-49dd-43dd-a883-a970c0fe57a4-kube-api-access-8rl62\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.674086 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.674601 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/30d03346-49dd-43dd-a883-a970c0fe57a4-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.674799 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.674867 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.776273 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rl62\" (UniqueName: \"kubernetes.io/projected/30d03346-49dd-43dd-a883-a970c0fe57a4-kube-api-access-8rl62\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.776340 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.776386 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d03346-49dd-43dd-a883-a970c0fe57a4-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.776409 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.776453 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.776545 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.777400 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d03346-49dd-43dd-a883-a970c0fe57a4-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.780768 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.787117 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.788413 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.789501 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 08:31:03.792077 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rl62\" (UniqueName: \"kubernetes.io/projected/30d03346-49dd-43dd-a883-a970c0fe57a4-kube-api-access-8rl62\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:03 crc kubenswrapper[5023]: I0219 
08:31:03.871195 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:04 crc kubenswrapper[5023]: I0219 08:31:04.339512 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:04 crc kubenswrapper[5023]: I0219 08:31:04.484931 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"30d03346-49dd-43dd-a883-a970c0fe57a4","Type":"ContainerStarted","Data":"78bf527499da443b25c396067228ca4ba1a58c9a8fe9712e4e07b81e09c9fc35"} Feb 19 08:31:05 crc kubenswrapper[5023]: I0219 08:31:05.485736 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e9a749-d85c-4f75-bb88-5e18bedd8b15" path="/var/lib/kubelet/pods/23e9a749-d85c-4f75-bb88-5e18bedd8b15/volumes" Feb 19 08:31:05 crc kubenswrapper[5023]: I0219 08:31:05.494150 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"30d03346-49dd-43dd-a883-a970c0fe57a4","Type":"ContainerStarted","Data":"71a2e3823fcffbd8dde12c8deb6938d26fa0d5c1ebfb5093a30f62fe215cb461"} Feb 19 08:31:05 crc kubenswrapper[5023]: I0219 08:31:05.513098 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.513073786 podStartE2EDuration="2.513073786s" podCreationTimestamp="2026-02-19 08:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:31:05.509913852 +0000 UTC m=+1823.167032790" watchObservedRunningTime="2026-02-19 08:31:05.513073786 +0000 UTC m=+1823.170192734" Feb 19 08:31:06 crc kubenswrapper[5023]: I0219 08:31:06.186422 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:07 crc kubenswrapper[5023]: I0219 08:31:07.432251 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:08 crc kubenswrapper[5023]: I0219 08:31:08.675488 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:09 crc kubenswrapper[5023]: I0219 08:31:09.862886 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:11 crc kubenswrapper[5023]: I0219 08:31:11.068928 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:12 crc kubenswrapper[5023]: I0219 08:31:12.259014 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:13 crc kubenswrapper[5023]: I0219 08:31:13.434554 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:13 crc kubenswrapper[5023]: I0219 08:31:13.872396 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:13 crc kubenswrapper[5023]: I0219 08:31:13.897986 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:14 crc kubenswrapper[5023]: I0219 08:31:14.477466 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:31:14 crc kubenswrapper[5023]: E0219 08:31:14.478412 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:31:14 crc kubenswrapper[5023]: I0219 08:31:14.572671 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:14 crc kubenswrapper[5023]: I0219 08:31:14.603577 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:14 crc kubenswrapper[5023]: I0219 08:31:14.624508 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:15 crc kubenswrapper[5023]: I0219 08:31:15.823384 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_30d03346-49dd-43dd-a883-a970c0fe57a4/watcher-decision-engine/0.log" Feb 19 08:31:15 crc kubenswrapper[5023]: I0219 08:31:15.979393 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb"] Feb 19 08:31:15 crc kubenswrapper[5023]: I0219 08:31:15.987169 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qqnxb"] Feb 19 08:31:16 crc 
kubenswrapper[5023]: I0219 08:31:16.036652 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher2a51-account-delete-58w6t"] Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.037727 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.045229 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2a51-account-delete-58w6t"] Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.122387 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.122649 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a27efcc0-c658-4771-8c7c-ab39b0318d81" containerName="watcher-applier" containerID="cri-o://d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083" gracePeriod=30 Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.157108 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.182278 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.182540 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-kuttl-api-log" containerID="cri-o://22dce1044d16de79226892598b6335b90f58137c95a378a6fe28d7f54c25161d" gracePeriod=30 Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.182702 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-api" containerID="cri-o://8ce04b38b8007f31e3f7e088e4de0e10ebcddad780366545e0429dbbd5a5ef4f" gracePeriod=30 Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.201076 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtpw9\" (UniqueName: \"kubernetes.io/projected/b24f54b4-1f4c-4c16-818e-8ce415d1216d-kube-api-access-jtpw9\") pod \"watcher2a51-account-delete-58w6t\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.201173 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24f54b4-1f4c-4c16-818e-8ce415d1216d-operator-scripts\") pod \"watcher2a51-account-delete-58w6t\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.303089 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtpw9\" (UniqueName: \"kubernetes.io/projected/b24f54b4-1f4c-4c16-818e-8ce415d1216d-kube-api-access-jtpw9\") pod \"watcher2a51-account-delete-58w6t\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.303179 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24f54b4-1f4c-4c16-818e-8ce415d1216d-operator-scripts\") pod \"watcher2a51-account-delete-58w6t\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.304352 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24f54b4-1f4c-4c16-818e-8ce415d1216d-operator-scripts\") pod \"watcher2a51-account-delete-58w6t\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.353262 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtpw9\" (UniqueName: \"kubernetes.io/projected/b24f54b4-1f4c-4c16-818e-8ce415d1216d-kube-api-access-jtpw9\") pod \"watcher2a51-account-delete-58w6t\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: E0219 08:31:16.357919 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda29d9751_3f3b_4b2e_a1a5_cbe7b3bbac07.slice/crio-conmon-22dce1044d16de79226892598b6335b90f58137c95a378a6fe28d7f54c25161d.scope\": RecentStats: unable to find data in memory cache]" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.406384 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.616043 5023 generic.go:334] "Generic (PLEG): container finished" podID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerID="22dce1044d16de79226892598b6335b90f58137c95a378a6fe28d7f54c25161d" exitCode=143 Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.616131 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07","Type":"ContainerDied","Data":"22dce1044d16de79226892598b6335b90f58137c95a378a6fe28d7f54c25161d"} Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.616585 5023 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-g7ghg\" not found" Feb 19 08:31:16 crc kubenswrapper[5023]: E0219 08:31:16.715271 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:16 crc kubenswrapper[5023]: E0219 08:31:16.715362 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data podName:30d03346-49dd-43dd-a883-a970c0fe57a4 nodeName:}" failed. No retries permitted until 2026-02-19 08:31:17.215341025 +0000 UTC m=+1834.872459973 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:16 crc kubenswrapper[5023]: I0219 08:31:16.949744 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2a51-account-delete-58w6t"] Feb 19 08:31:17 crc kubenswrapper[5023]: E0219 08:31:17.224805 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:17 crc kubenswrapper[5023]: E0219 08:31:17.225113 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data podName:30d03346-49dd-43dd-a883-a970c0fe57a4 nodeName:}" failed. No retries permitted until 2026-02-19 08:31:18.225097882 +0000 UTC m=+1835.882216830 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.504172 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f80b860-c7ce-4a16-a516-3d3ec01cc8fe" path="/var/lib/kubelet/pods/7f80b860-c7ce-4a16-a516-3d3ec01cc8fe/volumes" Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.663536 5023 generic.go:334] "Generic (PLEG): container finished" podID="b24f54b4-1f4c-4c16-818e-8ce415d1216d" containerID="921c3b2965e5260b2eb1ab96bd0d96cecd971fa7698627ba55bcaffb911cae71" exitCode=0 Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.663691 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" event={"ID":"b24f54b4-1f4c-4c16-818e-8ce415d1216d","Type":"ContainerDied","Data":"921c3b2965e5260b2eb1ab96bd0d96cecd971fa7698627ba55bcaffb911cae71"} Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.663866 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" event={"ID":"b24f54b4-1f4c-4c16-818e-8ce415d1216d","Type":"ContainerStarted","Data":"78664c83d86c83ab8d9f5efdd11df038343a9878af0159321a9bf4059b20a295"} Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.683501 5023 generic.go:334] "Generic (PLEG): container finished" podID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerID="8ce04b38b8007f31e3f7e088e4de0e10ebcddad780366545e0429dbbd5a5ef4f" exitCode=0 Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.683599 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07","Type":"ContainerDied","Data":"8ce04b38b8007f31e3f7e088e4de0e10ebcddad780366545e0429dbbd5a5ef4f"} Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.683947 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="30d03346-49dd-43dd-a883-a970c0fe57a4" containerName="watcher-decision-engine" containerID="cri-o://71a2e3823fcffbd8dde12c8deb6938d26fa0d5c1ebfb5093a30f62fe215cb461" gracePeriod=30 Feb 19 08:31:17 crc kubenswrapper[5023]: E0219 08:31:17.808671 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:31:17 crc kubenswrapper[5023]: E0219 08:31:17.810087 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:31:17 crc kubenswrapper[5023]: E0219 08:31:17.811792 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:31:17 crc kubenswrapper[5023]: E0219 08:31:17.811837 5023 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a27efcc0-c658-4771-8c7c-ab39b0318d81" containerName="watcher-applier" Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.867519 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.941327 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-config-data\") pod \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.941383 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-cert-memcached-mtls\") pod \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.941425 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f46f\" (UniqueName: \"kubernetes.io/projected/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-kube-api-access-7f46f\") pod \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.941457 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-logs\") pod \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.941533 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-combined-ca-bundle\") pod 
\"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.941602 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-custom-prometheus-ca\") pod \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\" (UID: \"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07\") " Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.942498 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-logs" (OuterVolumeSpecName: "logs") pod "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" (UID: "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.960856 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-kube-api-access-7f46f" (OuterVolumeSpecName: "kube-api-access-7f46f") pod "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" (UID: "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07"). InnerVolumeSpecName "kube-api-access-7f46f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.969746 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" (UID: "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:17 crc kubenswrapper[5023]: I0219 08:31:17.972565 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" (UID: "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.008804 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" (UID: "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.030756 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-config-data" (OuterVolumeSpecName: "config-data") pod "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" (UID: "a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.044367 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.044408 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.044420 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.044432 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f46f\" (UniqueName: \"kubernetes.io/projected/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-kube-api-access-7f46f\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.044444 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.044454 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:18 crc kubenswrapper[5023]: E0219 08:31:18.247930 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:18 crc kubenswrapper[5023]: E0219 08:31:18.248017 5023 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data podName:30d03346-49dd-43dd-a883-a970c0fe57a4 nodeName:}" failed. No retries permitted until 2026-02-19 08:31:20.248000147 +0000 UTC m=+1837.905119095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.695038 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.695022 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07","Type":"ContainerDied","Data":"24f8b81f765210c6c6ab02a665f45be10119b00d2487be9485a02d876ca56b57"} Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.695565 5023 scope.go:117] "RemoveContainer" containerID="8ce04b38b8007f31e3f7e088e4de0e10ebcddad780366545e0429dbbd5a5ef4f" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.720694 5023 scope.go:117] "RemoveContainer" containerID="22dce1044d16de79226892598b6335b90f58137c95a378a6fe28d7f54c25161d" Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.734817 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.741214 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.861910 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.862277 5023 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="sg-core" containerID="cri-o://5318d543a1d4c77970c185ce94c4468b7a85ced1281bb2d867db8e7a2fb41c16" gracePeriod=30 Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.862277 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-notification-agent" containerID="cri-o://4b22dea68bda971bc10bc2976b496ca78b551e07e4d0f9411683c959b86f1d9b" gracePeriod=30 Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.862316 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="proxy-httpd" containerID="cri-o://1c3cd2bae039e498cd95c70570e1d051f37b9fea80eae30052e6b45a2872ed86" gracePeriod=30 Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.863705 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-central-agent" containerID="cri-o://52a3e9a5c025ab504d072a7c0e1e6c2d847c91e28309cd9b473f0db3f660ec1b" gracePeriod=30 Feb 19 08:31:18 crc kubenswrapper[5023]: I0219 08:31:18.902303 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.067384 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.161810 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24f54b4-1f4c-4c16-818e-8ce415d1216d-operator-scripts\") pod \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.161974 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtpw9\" (UniqueName: \"kubernetes.io/projected/b24f54b4-1f4c-4c16-818e-8ce415d1216d-kube-api-access-jtpw9\") pod \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\" (UID: \"b24f54b4-1f4c-4c16-818e-8ce415d1216d\") " Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.162606 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24f54b4-1f4c-4c16-818e-8ce415d1216d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b24f54b4-1f4c-4c16-818e-8ce415d1216d" (UID: "b24f54b4-1f4c-4c16-818e-8ce415d1216d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.166169 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24f54b4-1f4c-4c16-818e-8ce415d1216d-kube-api-access-jtpw9" (OuterVolumeSpecName: "kube-api-access-jtpw9") pod "b24f54b4-1f4c-4c16-818e-8ce415d1216d" (UID: "b24f54b4-1f4c-4c16-818e-8ce415d1216d"). InnerVolumeSpecName "kube-api-access-jtpw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.263456 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24f54b4-1f4c-4c16-818e-8ce415d1216d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.263497 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtpw9\" (UniqueName: \"kubernetes.io/projected/b24f54b4-1f4c-4c16-818e-8ce415d1216d-kube-api-access-jtpw9\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.487787 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" path="/var/lib/kubelet/pods/a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07/volumes" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.706264 5023 generic.go:334] "Generic (PLEG): container finished" podID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerID="1c3cd2bae039e498cd95c70570e1d051f37b9fea80eae30052e6b45a2872ed86" exitCode=0 Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.706297 5023 generic.go:334] "Generic (PLEG): container finished" podID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerID="5318d543a1d4c77970c185ce94c4468b7a85ced1281bb2d867db8e7a2fb41c16" exitCode=2 Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.706305 5023 generic.go:334] "Generic (PLEG): container finished" podID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerID="52a3e9a5c025ab504d072a7c0e1e6c2d847c91e28309cd9b473f0db3f660ec1b" exitCode=0 Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.706339 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerDied","Data":"1c3cd2bae039e498cd95c70570e1d051f37b9fea80eae30052e6b45a2872ed86"} Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.706381 
5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerDied","Data":"5318d543a1d4c77970c185ce94c4468b7a85ced1281bb2d867db8e7a2fb41c16"} Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.706394 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerDied","Data":"52a3e9a5c025ab504d072a7c0e1e6c2d847c91e28309cd9b473f0db3f660ec1b"} Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.707993 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" event={"ID":"b24f54b4-1f4c-4c16-818e-8ce415d1216d","Type":"ContainerDied","Data":"78664c83d86c83ab8d9f5efdd11df038343a9878af0159321a9bf4059b20a295"} Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.708052 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78664c83d86c83ab8d9f5efdd11df038343a9878af0159321a9bf4059b20a295" Feb 19 08:31:19 crc kubenswrapper[5023]: I0219 08:31:19.708022 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2a51-account-delete-58w6t" Feb 19 08:31:20 crc kubenswrapper[5023]: E0219 08:31:20.279100 5023 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:20 crc kubenswrapper[5023]: E0219 08:31:20.279176 5023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data podName:30d03346-49dd-43dd-a883-a970c0fe57a4 nodeName:}" failed. No retries permitted until 2026-02-19 08:31:24.279161516 +0000 UTC m=+1841.936280464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.085272 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mwlxs"] Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.093760 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-mwlxs"] Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.099753 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7"] Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.108551 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher2a51-account-delete-58w6t"] Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.114719 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-2a51-account-create-update-bbdz7"] Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.120072 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher2a51-account-delete-58w6t"] Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.487487 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b56214-628d-4025-b897-877f5cc251a0" path="/var/lib/kubelet/pods/82b56214-628d-4025-b897-877f5cc251a0/volumes" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.488378 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24f54b4-1f4c-4c16-818e-8ce415d1216d" path="/var/lib/kubelet/pods/b24f54b4-1f4c-4c16-818e-8ce415d1216d/volumes" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.489176 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fcd40cf0-df29-4446-89f5-06fc184f01d0" path="/var/lib/kubelet/pods/fcd40cf0-df29-4446-89f5-06fc184f01d0/volumes" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.723944 5023 generic.go:334] "Generic (PLEG): container finished" podID="a27efcc0-c658-4771-8c7c-ab39b0318d81" containerID="d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083" exitCode=0 Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.724003 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a27efcc0-c658-4771-8c7c-ab39b0318d81","Type":"ContainerDied","Data":"d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083"} Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.725695 5023 generic.go:334] "Generic (PLEG): container finished" podID="30d03346-49dd-43dd-a883-a970c0fe57a4" containerID="71a2e3823fcffbd8dde12c8deb6938d26fa0d5c1ebfb5093a30f62fe215cb461" exitCode=0 Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.725740 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"30d03346-49dd-43dd-a883-a970c0fe57a4","Type":"ContainerDied","Data":"71a2e3823fcffbd8dde12c8deb6938d26fa0d5c1ebfb5093a30f62fe215cb461"} Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.725762 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"30d03346-49dd-43dd-a883-a970c0fe57a4","Type":"ContainerDied","Data":"78bf527499da443b25c396067228ca4ba1a58c9a8fe9712e4e07b81e09c9fc35"} Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.725777 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78bf527499da443b25c396067228ca4ba1a58c9a8fe9712e4e07b81e09c9fc35" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.745548 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.812950 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-cert-memcached-mtls\") pod \"30d03346-49dd-43dd-a883-a970c0fe57a4\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.813131 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d03346-49dd-43dd-a883-a970c0fe57a4-logs\") pod \"30d03346-49dd-43dd-a883-a970c0fe57a4\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.813201 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-custom-prometheus-ca\") pod \"30d03346-49dd-43dd-a883-a970c0fe57a4\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.813268 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rl62\" (UniqueName: \"kubernetes.io/projected/30d03346-49dd-43dd-a883-a970c0fe57a4-kube-api-access-8rl62\") pod \"30d03346-49dd-43dd-a883-a970c0fe57a4\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.813332 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-combined-ca-bundle\") pod \"30d03346-49dd-43dd-a883-a970c0fe57a4\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.813458 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data\") pod \"30d03346-49dd-43dd-a883-a970c0fe57a4\" (UID: \"30d03346-49dd-43dd-a883-a970c0fe57a4\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.815734 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d03346-49dd-43dd-a883-a970c0fe57a4-logs" (OuterVolumeSpecName: "logs") pod "30d03346-49dd-43dd-a883-a970c0fe57a4" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.826791 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d03346-49dd-43dd-a883-a970c0fe57a4-kube-api-access-8rl62" (OuterVolumeSpecName: "kube-api-access-8rl62") pod "30d03346-49dd-43dd-a883-a970c0fe57a4" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4"). InnerVolumeSpecName "kube-api-access-8rl62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.840781 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.847817 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "30d03346-49dd-43dd-a883-a970c0fe57a4" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.850223 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30d03346-49dd-43dd-a883-a970c0fe57a4" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.878279 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data" (OuterVolumeSpecName: "config-data") pod "30d03346-49dd-43dd-a883-a970c0fe57a4" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.910707 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "30d03346-49dd-43dd-a883-a970c0fe57a4" (UID: "30d03346-49dd-43dd-a883-a970c0fe57a4"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.915983 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9pxs\" (UniqueName: \"kubernetes.io/projected/a27efcc0-c658-4771-8c7c-ab39b0318d81-kube-api-access-l9pxs\") pod \"a27efcc0-c658-4771-8c7c-ab39b0318d81\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916070 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-combined-ca-bundle\") pod \"a27efcc0-c658-4771-8c7c-ab39b0318d81\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916117 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-cert-memcached-mtls\") pod \"a27efcc0-c658-4771-8c7c-ab39b0318d81\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916157 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a27efcc0-c658-4771-8c7c-ab39b0318d81-logs\") pod \"a27efcc0-c658-4771-8c7c-ab39b0318d81\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916209 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-config-data\") pod \"a27efcc0-c658-4771-8c7c-ab39b0318d81\" (UID: \"a27efcc0-c658-4771-8c7c-ab39b0318d81\") " Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916472 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a27efcc0-c658-4771-8c7c-ab39b0318d81-logs" (OuterVolumeSpecName: "logs") pod "a27efcc0-c658-4771-8c7c-ab39b0318d81" (UID: "a27efcc0-c658-4771-8c7c-ab39b0318d81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916849 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916873 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916883 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916893 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a27efcc0-c658-4771-8c7c-ab39b0318d81-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916903 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d03346-49dd-43dd-a883-a970c0fe57a4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916912 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/30d03346-49dd-43dd-a883-a970c0fe57a4-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.916921 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rl62\" 
(UniqueName: \"kubernetes.io/projected/30d03346-49dd-43dd-a883-a970c0fe57a4-kube-api-access-8rl62\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.919380 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27efcc0-c658-4771-8c7c-ab39b0318d81-kube-api-access-l9pxs" (OuterVolumeSpecName: "kube-api-access-l9pxs") pod "a27efcc0-c658-4771-8c7c-ab39b0318d81" (UID: "a27efcc0-c658-4771-8c7c-ab39b0318d81"). InnerVolumeSpecName "kube-api-access-l9pxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.940517 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a27efcc0-c658-4771-8c7c-ab39b0318d81" (UID: "a27efcc0-c658-4771-8c7c-ab39b0318d81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.960414 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-config-data" (OuterVolumeSpecName: "config-data") pod "a27efcc0-c658-4771-8c7c-ab39b0318d81" (UID: "a27efcc0-c658-4771-8c7c-ab39b0318d81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:21 crc kubenswrapper[5023]: I0219 08:31:21.967628 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a27efcc0-c658-4771-8c7c-ab39b0318d81" (UID: "a27efcc0-c658-4771-8c7c-ab39b0318d81"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.018778 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.018814 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9pxs\" (UniqueName: \"kubernetes.io/projected/a27efcc0-c658-4771-8c7c-ab39b0318d81-kube-api-access-l9pxs\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.018826 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.018835 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a27efcc0-c658-4771-8c7c-ab39b0318d81-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.684286 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.224:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.684346 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.224:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.738579 5023 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.739107 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.741846 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a27efcc0-c658-4771-8c7c-ab39b0318d81","Type":"ContainerDied","Data":"2b644126e2f16f6c6bebdcdd81b5c8a7907fd863c953f0bbaafa6b261474feb4"} Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.741900 5023 scope.go:117] "RemoveContainer" containerID="d97ae386b6edf713659982b7dd4b7ddd5d027d084396a1dd6df717eef98ba083" Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.777793 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.784130 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.797720 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:22 crc kubenswrapper[5023]: I0219 08:31:22.807731 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.217296 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-7vkvp"] Feb 19 08:31:23 crc kubenswrapper[5023]: E0219 08:31:23.217671 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27efcc0-c658-4771-8c7c-ab39b0318d81" containerName="watcher-applier" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.217691 5023 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a27efcc0-c658-4771-8c7c-ab39b0318d81" containerName="watcher-applier" Feb 19 08:31:23 crc kubenswrapper[5023]: E0219 08:31:23.217712 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d03346-49dd-43dd-a883-a970c0fe57a4" containerName="watcher-decision-engine" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.217721 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d03346-49dd-43dd-a883-a970c0fe57a4" containerName="watcher-decision-engine" Feb 19 08:31:23 crc kubenswrapper[5023]: E0219 08:31:23.217735 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24f54b4-1f4c-4c16-818e-8ce415d1216d" containerName="mariadb-account-delete" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.217743 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24f54b4-1f4c-4c16-818e-8ce415d1216d" containerName="mariadb-account-delete" Feb 19 08:31:23 crc kubenswrapper[5023]: E0219 08:31:23.217758 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-kuttl-api-log" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.217765 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-kuttl-api-log" Feb 19 08:31:23 crc kubenswrapper[5023]: E0219 08:31:23.217789 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-api" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.217798 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-api" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.218001 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24f54b4-1f4c-4c16-818e-8ce415d1216d" containerName="mariadb-account-delete" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.218022 5023 
memory_manager.go:354] "RemoveStaleState removing state" podUID="30d03346-49dd-43dd-a883-a970c0fe57a4" containerName="watcher-decision-engine" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.218037 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-kuttl-api-log" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.218051 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27efcc0-c658-4771-8c7c-ab39b0318d81" containerName="watcher-applier" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.218062 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29d9751-3f3b-4b2e-a1a5-cbe7b3bbac07" containerName="watcher-api" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.218800 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.229304 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7vkvp"] Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.316604 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp"] Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.317727 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.322179 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.325829 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp"] Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.338507 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d29ed7-687b-47bf-bc4b-b2466e0cb913-operator-scripts\") pod \"watcher-db-create-7vkvp\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.338587 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj8qw\" (UniqueName: \"kubernetes.io/projected/16d29ed7-687b-47bf-bc4b-b2466e0cb913-kube-api-access-hj8qw\") pod \"watcher-db-create-7vkvp\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.440978 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d75349-c69a-4f53-938a-8d70833ee4d1-operator-scripts\") pod \"watcher-cbe6-account-create-update-k9wjp\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.441407 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/16d29ed7-687b-47bf-bc4b-b2466e0cb913-operator-scripts\") pod \"watcher-db-create-7vkvp\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.441473 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9x98\" (UniqueName: \"kubernetes.io/projected/48d75349-c69a-4f53-938a-8d70833ee4d1-kube-api-access-n9x98\") pod \"watcher-cbe6-account-create-update-k9wjp\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.441530 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj8qw\" (UniqueName: \"kubernetes.io/projected/16d29ed7-687b-47bf-bc4b-b2466e0cb913-kube-api-access-hj8qw\") pod \"watcher-db-create-7vkvp\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.442612 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d29ed7-687b-47bf-bc4b-b2466e0cb913-operator-scripts\") pod \"watcher-db-create-7vkvp\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.471586 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj8qw\" (UniqueName: \"kubernetes.io/projected/16d29ed7-687b-47bf-bc4b-b2466e0cb913-kube-api-access-hj8qw\") pod \"watcher-db-create-7vkvp\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.494665 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="30d03346-49dd-43dd-a883-a970c0fe57a4" path="/var/lib/kubelet/pods/30d03346-49dd-43dd-a883-a970c0fe57a4/volumes" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.495255 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a27efcc0-c658-4771-8c7c-ab39b0318d81" path="/var/lib/kubelet/pods/a27efcc0-c658-4771-8c7c-ab39b0318d81/volumes" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.535820 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.542533 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d75349-c69a-4f53-938a-8d70833ee4d1-operator-scripts\") pod \"watcher-cbe6-account-create-update-k9wjp\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.542721 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9x98\" (UniqueName: \"kubernetes.io/projected/48d75349-c69a-4f53-938a-8d70833ee4d1-kube-api-access-n9x98\") pod \"watcher-cbe6-account-create-update-k9wjp\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.544034 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d75349-c69a-4f53-938a-8d70833ee4d1-operator-scripts\") pod \"watcher-cbe6-account-create-update-k9wjp\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.567118 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n9x98\" (UniqueName: \"kubernetes.io/projected/48d75349-c69a-4f53-938a-8d70833ee4d1-kube-api-access-n9x98\") pod \"watcher-cbe6-account-create-update-k9wjp\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.650018 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.785801 5023 generic.go:334] "Generic (PLEG): container finished" podID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerID="4b22dea68bda971bc10bc2976b496ca78b551e07e4d0f9411683c959b86f1d9b" exitCode=0 Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.785849 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerDied","Data":"4b22dea68bda971bc10bc2976b496ca78b551e07e4d0f9411683c959b86f1d9b"} Feb 19 08:31:23 crc kubenswrapper[5023]: I0219 08:31:23.999602 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7vkvp"] Feb 19 08:31:24 crc kubenswrapper[5023]: W0219 08:31:24.003734 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d29ed7_687b_47bf_bc4b_b2466e0cb913.slice/crio-3b340662d2591fb306ca95909db333ec7c2260a610bcd33529f854386595a5e6 WatchSource:0}: Error finding container 3b340662d2591fb306ca95909db333ec7c2260a610bcd33529f854386595a5e6: Status 404 returned error can't find the container with id 3b340662d2591fb306ca95909db333ec7c2260a610bcd33529f854386595a5e6 Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.019020 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051003 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-combined-ca-bundle\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051039 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-ceilometer-tls-certs\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051078 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-log-httpd\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051103 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-run-httpd\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051202 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzdwq\" (UniqueName: \"kubernetes.io/projected/27778914-7f8d-4c26-9e77-a93ccd6b04e1-kube-api-access-zzdwq\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051218 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-config-data\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051238 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-sg-core-conf-yaml\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.051272 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-scripts\") pod \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\" (UID: \"27778914-7f8d-4c26-9e77-a93ccd6b04e1\") " Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.052510 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: "27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.057449 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-scripts" (OuterVolumeSpecName: "scripts") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: "27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.058165 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: "27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.064403 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27778914-7f8d-4c26-9e77-a93ccd6b04e1-kube-api-access-zzdwq" (OuterVolumeSpecName: "kube-api-access-zzdwq") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: "27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "kube-api-access-zzdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.094877 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: "27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.145983 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: "27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.154313 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzdwq\" (UniqueName: \"kubernetes.io/projected/27778914-7f8d-4c26-9e77-a93ccd6b04e1-kube-api-access-zzdwq\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.154350 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.154361 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.154372 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.154383 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.154392 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27778914-7f8d-4c26-9e77-a93ccd6b04e1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.170790 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: 
"27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: W0219 08:31:24.170968 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48d75349_c69a_4f53_938a_8d70833ee4d1.slice/crio-17e65609c83efd8f4a9c2c7b7177147f7eff0680efc4f0ead9811b0ba9472684 WatchSource:0}: Error finding container 17e65609c83efd8f4a9c2c7b7177147f7eff0680efc4f0ead9811b0ba9472684: Status 404 returned error can't find the container with id 17e65609c83efd8f4a9c2c7b7177147f7eff0680efc4f0ead9811b0ba9472684 Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.174699 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp"] Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.184374 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-config-data" (OuterVolumeSpecName: "config-data") pod "27778914-7f8d-4c26-9e77-a93ccd6b04e1" (UID: "27778914-7f8d-4c26-9e77-a93ccd6b04e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.255735 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.255777 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27778914-7f8d-4c26-9e77-a93ccd6b04e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.795222 5023 generic.go:334] "Generic (PLEG): container finished" podID="48d75349-c69a-4f53-938a-8d70833ee4d1" containerID="651295a8ce271fe2fa3d268b992d1436967927f596688fd7393d7defe2023c16" exitCode=0 Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.795307 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" event={"ID":"48d75349-c69a-4f53-938a-8d70833ee4d1","Type":"ContainerDied","Data":"651295a8ce271fe2fa3d268b992d1436967927f596688fd7393d7defe2023c16"} Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.795340 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" event={"ID":"48d75349-c69a-4f53-938a-8d70833ee4d1","Type":"ContainerStarted","Data":"17e65609c83efd8f4a9c2c7b7177147f7eff0680efc4f0ead9811b0ba9472684"} Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.797051 5023 generic.go:334] "Generic (PLEG): container finished" podID="16d29ed7-687b-47bf-bc4b-b2466e0cb913" containerID="1f9dcdf7e3ab927e4ea5acf1fd53948d16e533e72a489bf059502c7fc6896a4d" exitCode=0 Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.797104 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7vkvp" 
event={"ID":"16d29ed7-687b-47bf-bc4b-b2466e0cb913","Type":"ContainerDied","Data":"1f9dcdf7e3ab927e4ea5acf1fd53948d16e533e72a489bf059502c7fc6896a4d"} Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.797121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7vkvp" event={"ID":"16d29ed7-687b-47bf-bc4b-b2466e0cb913","Type":"ContainerStarted","Data":"3b340662d2591fb306ca95909db333ec7c2260a610bcd33529f854386595a5e6"} Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.800313 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"27778914-7f8d-4c26-9e77-a93ccd6b04e1","Type":"ContainerDied","Data":"a3ad05b2b2fe6c044a3bf3aa029ed7c2717acc76421f7e1c830bc5a9773237aa"} Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.800365 5023 scope.go:117] "RemoveContainer" containerID="1c3cd2bae039e498cd95c70570e1d051f37b9fea80eae30052e6b45a2872ed86" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.800422 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.839912 5023 scope.go:117] "RemoveContainer" containerID="5318d543a1d4c77970c185ce94c4468b7a85ced1281bb2d867db8e7a2fb41c16" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.901467 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.901901 5023 scope.go:117] "RemoveContainer" containerID="4b22dea68bda971bc10bc2976b496ca78b551e07e4d0f9411683c959b86f1d9b" Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.924224 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:24 crc kubenswrapper[5023]: I0219 08:31:24.999094 5023 scope.go:117] "RemoveContainer" containerID="52a3e9a5c025ab504d072a7c0e1e6c2d847c91e28309cd9b473f0db3f660ec1b" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.017393 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:25 crc kubenswrapper[5023]: E0219 08:31:25.017880 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="proxy-httpd" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.017904 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="proxy-httpd" Feb 19 08:31:25 crc kubenswrapper[5023]: E0219 08:31:25.017918 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="sg-core" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.017926 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="sg-core" Feb 19 08:31:25 crc kubenswrapper[5023]: E0219 08:31:25.017944 5023 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-notification-agent" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.017951 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-notification-agent" Feb 19 08:31:25 crc kubenswrapper[5023]: E0219 08:31:25.017983 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-central-agent" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.017991 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-central-agent" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.018164 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-central-agent" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.018190 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="proxy-httpd" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.018202 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="ceilometer-notification-agent" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.018215 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" containerName="sg-core" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.019809 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.023400 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.023685 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.023902 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.024689 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.077693 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.077769 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-scripts\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.077821 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.077860 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.077895 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.077919 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.077976 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-config-data\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.078003 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sdzl\" (UniqueName: \"kubernetes.io/projected/e2e8accd-ce35-4253-b2b8-8b77577dce99-kube-api-access-7sdzl\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.178970 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-scripts\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179059 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179079 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179105 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179125 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179173 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-config-data\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179193 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sdzl\" (UniqueName: \"kubernetes.io/projected/e2e8accd-ce35-4253-b2b8-8b77577dce99-kube-api-access-7sdzl\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179212 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179877 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-run-httpd\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.179953 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-log-httpd\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.194456 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.194533 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.194599 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-scripts\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.194978 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.195752 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-config-data\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.198948 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sdzl\" (UniqueName: \"kubernetes.io/projected/e2e8accd-ce35-4253-b2b8-8b77577dce99-kube-api-access-7sdzl\") pod \"ceilometer-0\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.336442 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.490323 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27778914-7f8d-4c26-9e77-a93ccd6b04e1" path="/var/lib/kubelet/pods/27778914-7f8d-4c26-9e77-a93ccd6b04e1/volumes" Feb 19 08:31:25 crc kubenswrapper[5023]: W0219 08:31:25.845237 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e8accd_ce35_4253_b2b8_8b77577dce99.slice/crio-bf3fcfced322a890bfe9d0172e7b8b7c39e42965d491db7994707c8312916f70 WatchSource:0}: Error finding container bf3fcfced322a890bfe9d0172e7b8b7c39e42965d491db7994707c8312916f70: Status 404 returned error can't find the container with id bf3fcfced322a890bfe9d0172e7b8b7c39e42965d491db7994707c8312916f70 Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.849903 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:25 crc kubenswrapper[5023]: I0219 08:31:25.854116 5023 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.238660 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.244225 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.302980 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj8qw\" (UniqueName: \"kubernetes.io/projected/16d29ed7-687b-47bf-bc4b-b2466e0cb913-kube-api-access-hj8qw\") pod \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.303077 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d75349-c69a-4f53-938a-8d70833ee4d1-operator-scripts\") pod \"48d75349-c69a-4f53-938a-8d70833ee4d1\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.303159 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d29ed7-687b-47bf-bc4b-b2466e0cb913-operator-scripts\") pod \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\" (UID: \"16d29ed7-687b-47bf-bc4b-b2466e0cb913\") " Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.303218 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9x98\" (UniqueName: \"kubernetes.io/projected/48d75349-c69a-4f53-938a-8d70833ee4d1-kube-api-access-n9x98\") pod \"48d75349-c69a-4f53-938a-8d70833ee4d1\" (UID: \"48d75349-c69a-4f53-938a-8d70833ee4d1\") " Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.304822 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48d75349-c69a-4f53-938a-8d70833ee4d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48d75349-c69a-4f53-938a-8d70833ee4d1" (UID: "48d75349-c69a-4f53-938a-8d70833ee4d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.304912 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16d29ed7-687b-47bf-bc4b-b2466e0cb913-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16d29ed7-687b-47bf-bc4b-b2466e0cb913" (UID: "16d29ed7-687b-47bf-bc4b-b2466e0cb913"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.308872 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d75349-c69a-4f53-938a-8d70833ee4d1-kube-api-access-n9x98" (OuterVolumeSpecName: "kube-api-access-n9x98") pod "48d75349-c69a-4f53-938a-8d70833ee4d1" (UID: "48d75349-c69a-4f53-938a-8d70833ee4d1"). InnerVolumeSpecName "kube-api-access-n9x98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.308987 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d29ed7-687b-47bf-bc4b-b2466e0cb913-kube-api-access-hj8qw" (OuterVolumeSpecName: "kube-api-access-hj8qw") pod "16d29ed7-687b-47bf-bc4b-b2466e0cb913" (UID: "16d29ed7-687b-47bf-bc4b-b2466e0cb913"). InnerVolumeSpecName "kube-api-access-hj8qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.405007 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9x98\" (UniqueName: \"kubernetes.io/projected/48d75349-c69a-4f53-938a-8d70833ee4d1-kube-api-access-n9x98\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.405043 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj8qw\" (UniqueName: \"kubernetes.io/projected/16d29ed7-687b-47bf-bc4b-b2466e0cb913-kube-api-access-hj8qw\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.405056 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d75349-c69a-4f53-938a-8d70833ee4d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.405068 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d29ed7-687b-47bf-bc4b-b2466e0cb913-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.818646 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" event={"ID":"48d75349-c69a-4f53-938a-8d70833ee4d1","Type":"ContainerDied","Data":"17e65609c83efd8f4a9c2c7b7177147f7eff0680efc4f0ead9811b0ba9472684"} Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.818987 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e65609c83efd8f4a9c2c7b7177147f7eff0680efc4f0ead9811b0ba9472684" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.818712 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.820222 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7vkvp" event={"ID":"16d29ed7-687b-47bf-bc4b-b2466e0cb913","Type":"ContainerDied","Data":"3b340662d2591fb306ca95909db333ec7c2260a610bcd33529f854386595a5e6"} Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.820250 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b340662d2591fb306ca95909db333ec7c2260a610bcd33529f854386595a5e6" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.820304 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7vkvp" Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.827675 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerStarted","Data":"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6"} Feb 19 08:31:26 crc kubenswrapper[5023]: I0219 08:31:26.827715 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerStarted","Data":"bf3fcfced322a890bfe9d0172e7b8b7c39e42965d491db7994707c8312916f70"} Feb 19 08:31:27 crc kubenswrapper[5023]: I0219 08:31:27.477282 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:31:27 crc kubenswrapper[5023]: E0219 08:31:27.477535 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:31:27 crc kubenswrapper[5023]: I0219 08:31:27.853613 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerStarted","Data":"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768"} Feb 19 08:31:27 crc kubenswrapper[5023]: I0219 08:31:27.853866 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerStarted","Data":"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956"} Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.627644 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kv49c"] Feb 19 08:31:28 crc kubenswrapper[5023]: E0219 08:31:28.628268 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d75349-c69a-4f53-938a-8d70833ee4d1" containerName="mariadb-account-create-update" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.628282 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d75349-c69a-4f53-938a-8d70833ee4d1" containerName="mariadb-account-create-update" Feb 19 08:31:28 crc kubenswrapper[5023]: E0219 08:31:28.628311 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d29ed7-687b-47bf-bc4b-b2466e0cb913" containerName="mariadb-database-create" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.628318 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d29ed7-687b-47bf-bc4b-b2466e0cb913" containerName="mariadb-database-create" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.628462 5023 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="48d75349-c69a-4f53-938a-8d70833ee4d1" containerName="mariadb-account-create-update" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.628479 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d29ed7-687b-47bf-bc4b-b2466e0cb913" containerName="mariadb-database-create" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.629115 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.631646 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.631895 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cmr25" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.637790 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kv49c"] Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.740219 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-config-data\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.740279 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.740425 5023 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mssb6\" (UniqueName: \"kubernetes.io/projected/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-kube-api-access-mssb6\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.740637 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.841924 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.842036 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-config-data\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.842063 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc 
kubenswrapper[5023]: I0219 08:31:28.842096 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mssb6\" (UniqueName: \"kubernetes.io/projected/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-kube-api-access-mssb6\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.846102 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.846131 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.846160 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-config-data\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc kubenswrapper[5023]: I0219 08:31:28.867953 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mssb6\" (UniqueName: \"kubernetes.io/projected/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-kube-api-access-mssb6\") pod \"watcher-kuttl-db-sync-kv49c\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:28 crc 
kubenswrapper[5023]: I0219 08:31:28.953447 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:29 crc kubenswrapper[5023]: I0219 08:31:29.493940 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kv49c"] Feb 19 08:31:29 crc kubenswrapper[5023]: W0219 08:31:29.498765 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod897dc027_ddad_42fc_ad81_fa4a5b7c52ad.slice/crio-d83caae904b684e47d46d75c8b71bb47fe4e138550707340166865cae22e2f10 WatchSource:0}: Error finding container d83caae904b684e47d46d75c8b71bb47fe4e138550707340166865cae22e2f10: Status 404 returned error can't find the container with id d83caae904b684e47d46d75c8b71bb47fe4e138550707340166865cae22e2f10 Feb 19 08:31:29 crc kubenswrapper[5023]: I0219 08:31:29.874961 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" event={"ID":"897dc027-ddad-42fc-ad81-fa4a5b7c52ad","Type":"ContainerStarted","Data":"63a64de1890df2bc32c4973d7e16bf6d37b570bae5d98bbaa9cc865c84f6a946"} Feb 19 08:31:29 crc kubenswrapper[5023]: I0219 08:31:29.875001 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" event={"ID":"897dc027-ddad-42fc-ad81-fa4a5b7c52ad","Type":"ContainerStarted","Data":"d83caae904b684e47d46d75c8b71bb47fe4e138550707340166865cae22e2f10"} Feb 19 08:31:29 crc kubenswrapper[5023]: I0219 08:31:29.878908 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerStarted","Data":"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81"} Feb 19 08:31:29 crc kubenswrapper[5023]: I0219 08:31:29.879089 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:29 crc kubenswrapper[5023]: I0219 08:31:29.890430 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" podStartSLOduration=1.890407041 podStartE2EDuration="1.890407041s" podCreationTimestamp="2026-02-19 08:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:31:29.887987126 +0000 UTC m=+1847.545106094" watchObservedRunningTime="2026-02-19 08:31:29.890407041 +0000 UTC m=+1847.547525999" Feb 19 08:31:30 crc kubenswrapper[5023]: I0219 08:31:30.812025 5023 scope.go:117] "RemoveContainer" containerID="ce8862942dbe5381269645a4ca9e70fc1bee2d4282900dc4bb71343766fd619b" Feb 19 08:31:30 crc kubenswrapper[5023]: I0219 08:31:30.831277 5023 scope.go:117] "RemoveContainer" containerID="bd7b11302154249b241025202eb6e84dc3959a0426143162c18b884992596735" Feb 19 08:31:30 crc kubenswrapper[5023]: I0219 08:31:30.871611 5023 scope.go:117] "RemoveContainer" containerID="ce9e08f4a7334dda4fb865562f1a3e9634566ad5e16bf6d7e9318ecd875f7527" Feb 19 08:31:32 crc kubenswrapper[5023]: I0219 08:31:32.917163 5023 generic.go:334] "Generic (PLEG): container finished" podID="897dc027-ddad-42fc-ad81-fa4a5b7c52ad" containerID="63a64de1890df2bc32c4973d7e16bf6d37b570bae5d98bbaa9cc865c84f6a946" exitCode=0 Feb 19 08:31:32 crc kubenswrapper[5023]: I0219 08:31:32.917492 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" event={"ID":"897dc027-ddad-42fc-ad81-fa4a5b7c52ad","Type":"ContainerDied","Data":"63a64de1890df2bc32c4973d7e16bf6d37b570bae5d98bbaa9cc865c84f6a946"} Feb 19 08:31:32 crc kubenswrapper[5023]: I0219 08:31:32.941518 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=5.595064861 podStartE2EDuration="8.941495677s" 
podCreationTimestamp="2026-02-19 08:31:24 +0000 UTC" firstStartedPulling="2026-02-19 08:31:25.853892429 +0000 UTC m=+1843.511011377" lastFinishedPulling="2026-02-19 08:31:29.200323255 +0000 UTC m=+1846.857442193" observedRunningTime="2026-02-19 08:31:29.916469124 +0000 UTC m=+1847.573588072" watchObservedRunningTime="2026-02-19 08:31:32.941495677 +0000 UTC m=+1850.598614625" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.332164 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.461611 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-combined-ca-bundle\") pod \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.461859 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mssb6\" (UniqueName: \"kubernetes.io/projected/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-kube-api-access-mssb6\") pod \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.461913 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-db-sync-config-data\") pod \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\" (UID: \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.461972 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-config-data\") pod \"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\" (UID: 
\"897dc027-ddad-42fc-ad81-fa4a5b7c52ad\") " Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.480824 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-kube-api-access-mssb6" (OuterVolumeSpecName: "kube-api-access-mssb6") pod "897dc027-ddad-42fc-ad81-fa4a5b7c52ad" (UID: "897dc027-ddad-42fc-ad81-fa4a5b7c52ad"). InnerVolumeSpecName "kube-api-access-mssb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.480917 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "897dc027-ddad-42fc-ad81-fa4a5b7c52ad" (UID: "897dc027-ddad-42fc-ad81-fa4a5b7c52ad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.514788 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "897dc027-ddad-42fc-ad81-fa4a5b7c52ad" (UID: "897dc027-ddad-42fc-ad81-fa4a5b7c52ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.544767 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-config-data" (OuterVolumeSpecName: "config-data") pod "897dc027-ddad-42fc-ad81-fa4a5b7c52ad" (UID: "897dc027-ddad-42fc-ad81-fa4a5b7c52ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.563781 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mssb6\" (UniqueName: \"kubernetes.io/projected/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-kube-api-access-mssb6\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.563811 5023 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.563821 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.563832 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/897dc027-ddad-42fc-ad81-fa4a5b7c52ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.940139 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" event={"ID":"897dc027-ddad-42fc-ad81-fa4a5b7c52ad","Type":"ContainerDied","Data":"d83caae904b684e47d46d75c8b71bb47fe4e138550707340166865cae22e2f10"} Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.940213 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83caae904b684e47d46d75c8b71bb47fe4e138550707340166865cae22e2f10" Feb 19 08:31:34 crc kubenswrapper[5023]: I0219 08:31:34.940253 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kv49c" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.365192 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:31:35 crc kubenswrapper[5023]: E0219 08:31:35.368428 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897dc027-ddad-42fc-ad81-fa4a5b7c52ad" containerName="watcher-kuttl-db-sync" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.368456 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="897dc027-ddad-42fc-ad81-fa4a5b7c52ad" containerName="watcher-kuttl-db-sync" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.368918 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="897dc027-ddad-42fc-ad81-fa4a5b7c52ad" containerName="watcher-kuttl-db-sync" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.370052 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.375960 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cmr25" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.376577 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.391738 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.427102 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.433946 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.441475 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.459716 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.463445 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.464879 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.474785 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478014 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74w72\" (UniqueName: \"kubernetes.io/projected/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-kube-api-access-74w72\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478064 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478096 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478126 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9bq\" (UniqueName: \"kubernetes.io/projected/6850e909-6998-4241-b3da-1af27d5663b6-kube-api-access-lv9bq\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478155 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478196 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478220 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478247 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478263 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478432 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6850e909-6998-4241-b3da-1af27d5663b6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.478501 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.538460 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.555004 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.553472 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.557614 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583592 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74w72\" (UniqueName: \"kubernetes.io/projected/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-kube-api-access-74w72\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583691 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583719 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c708a586-e602-4936-a980-8dc881d3e36c-logs\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583753 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583775 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583803 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv9bq\" (UniqueName: \"kubernetes.io/projected/6850e909-6998-4241-b3da-1af27d5663b6-kube-api-access-lv9bq\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583837 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583901 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdj28\" (UniqueName: \"kubernetes.io/projected/c708a586-e602-4936-a980-8dc881d3e36c-kube-api-access-sdj28\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583930 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583955 5023 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.583984 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.584015 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.584043 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.584068 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.584093 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.584154 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6850e909-6998-4241-b3da-1af27d5663b6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.584190 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.588047 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.588403 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6850e909-6998-4241-b3da-1af27d5663b6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.590819 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: 
\"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.591182 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.591427 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.591720 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.592057 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.594332 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 
08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.603953 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74w72\" (UniqueName: \"kubernetes.io/projected/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-kube-api-access-74w72\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.605055 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv9bq\" (UniqueName: \"kubernetes.io/projected/6850e909-6998-4241-b3da-1af27d5663b6-kube-api-access-lv9bq\") pod \"watcher-kuttl-applier-0\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.609777 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687608 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687695 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e096615-0d85-458f-8c45-29eddee745d7-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687717 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnzp8\" (UniqueName: \"kubernetes.io/projected/3e096615-0d85-458f-8c45-29eddee745d7-kube-api-access-rnzp8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687752 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c708a586-e602-4936-a980-8dc881d3e36c-logs\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687773 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687805 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687829 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687865 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687885 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdj28\" (UniqueName: \"kubernetes.io/projected/c708a586-e602-4936-a980-8dc881d3e36c-kube-api-access-sdj28\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687905 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687933 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.687950 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.689701 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c708a586-e602-4936-a980-8dc881d3e36c-logs\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.692953 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.699133 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.702880 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.704254 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.716415 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-config-data\") pod 
\"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.726229 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdj28\" (UniqueName: \"kubernetes.io/projected/c708a586-e602-4936-a980-8dc881d3e36c-kube-api-access-sdj28\") pod \"watcher-kuttl-api-1\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.753448 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.788823 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.788903 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.788965 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.789002 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e096615-0d85-458f-8c45-29eddee745d7-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.789026 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnzp8\" (UniqueName: \"kubernetes.io/projected/3e096615-0d85-458f-8c45-29eddee745d7-kube-api-access-rnzp8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.789056 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.792037 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e096615-0d85-458f-8c45-29eddee745d7-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.796205 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 
08:31:35.803409 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.803674 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.805175 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.812559 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnzp8\" (UniqueName: \"kubernetes.io/projected/3e096615-0d85-458f-8c45-29eddee745d7-kube-api-access-rnzp8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.827062 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:35 crc kubenswrapper[5023]: I0219 08:31:35.884743 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.196331 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.316312 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.394287 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.439584 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:31:36 crc kubenswrapper[5023]: W0219 08:31:36.446942 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e096615_0d85_458f_8c45_29eddee745d7.slice/crio-f6e1956b8f98192f6ae923d2c99cb4299e55a02f3c55f0a953b2d4f0ae6ecbc9 WatchSource:0}: Error finding container f6e1956b8f98192f6ae923d2c99cb4299e55a02f3c55f0a953b2d4f0ae6ecbc9: Status 404 returned error can't find the container with id f6e1956b8f98192f6ae923d2c99cb4299e55a02f3c55f0a953b2d4f0ae6ecbc9 Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.975775 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6850e909-6998-4241-b3da-1af27d5663b6","Type":"ContainerStarted","Data":"a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.976062 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6850e909-6998-4241-b3da-1af27d5663b6","Type":"ContainerStarted","Data":"e75207413d2db2544d46c68b5a22f2a2ce1fe7138c2fe52cfaa14c811e1d448f"} Feb 19 08:31:36 crc kubenswrapper[5023]: 
I0219 08:31:36.978109 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c708a586-e602-4936-a980-8dc881d3e36c","Type":"ContainerStarted","Data":"e31e970ca83769b8d0a6013ddb9bb108af48b20c22a48aa095c37227e08bc9c8"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.978160 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c708a586-e602-4936-a980-8dc881d3e36c","Type":"ContainerStarted","Data":"9f2d223fe9282839b13d70ab346f25af165478d419821018cda029a975154a9a"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.978173 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c708a586-e602-4936-a980-8dc881d3e36c","Type":"ContainerStarted","Data":"85d2c679eb1f671c32b442ef89315b181a3273a11b251fc34b7bd6a1ad054724"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.978306 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.979564 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.250:9322/\": dial tcp 10.217.0.250:9322: connect: connection refused" Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.980905 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3e096615-0d85-458f-8c45-29eddee745d7","Type":"ContainerStarted","Data":"32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.981110 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"3e096615-0d85-458f-8c45-29eddee745d7","Type":"ContainerStarted","Data":"f6e1956b8f98192f6ae923d2c99cb4299e55a02f3c55f0a953b2d4f0ae6ecbc9"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.982718 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43","Type":"ContainerStarted","Data":"3a1086991515c08f525a45693b212952984ae5853bfb2367d92480e3878f2f4e"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.982744 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43","Type":"ContainerStarted","Data":"514620816f92fb5fc033ffcf1b30a8ae00be3de2646a9f9dbbe8ea34257ec85a"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.982754 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43","Type":"ContainerStarted","Data":"1efda38e480044951fcdaa029223d4b6b8536b1f626e989a2190c3f9bddf0e54"} Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.984279 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:36 crc kubenswrapper[5023]: I0219 08:31:36.984348 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.249:9322/\": dial tcp 10.217.0.249:9322: connect: connection refused" Feb 19 08:31:37 crc kubenswrapper[5023]: I0219 08:31:37.003437 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.003415424 podStartE2EDuration="2.003415424s" podCreationTimestamp="2026-02-19 08:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:31:36.998189715 +0000 UTC m=+1854.655308663" watchObservedRunningTime="2026-02-19 08:31:37.003415424 +0000 UTC m=+1854.660534372" Feb 19 08:31:37 crc kubenswrapper[5023]: I0219 08:31:37.024926 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.024907436 podStartE2EDuration="2.024907436s" podCreationTimestamp="2026-02-19 08:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:31:37.023976561 +0000 UTC m=+1854.681095509" watchObservedRunningTime="2026-02-19 08:31:37.024907436 +0000 UTC m=+1854.682026384" Feb 19 08:31:37 crc kubenswrapper[5023]: I0219 08:31:37.040552 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.040532042 podStartE2EDuration="2.040532042s" podCreationTimestamp="2026-02-19 08:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:31:37.038872258 +0000 UTC m=+1854.695991206" watchObservedRunningTime="2026-02-19 08:31:37.040532042 +0000 UTC m=+1854.697650980" Feb 19 08:31:37 crc kubenswrapper[5023]: I0219 08:31:37.076063 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.0760445770000002 podStartE2EDuration="2.076044577s" podCreationTimestamp="2026-02-19 08:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:31:37.072715228 +0000 UTC m=+1854.729834176" watchObservedRunningTime="2026-02-19 08:31:37.076044577 +0000 UTC m=+1854.733163525" Feb 19 08:31:40 crc 
kubenswrapper[5023]: I0219 08:31:40.332073 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:40 crc kubenswrapper[5023]: I0219 08:31:40.418471 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:40 crc kubenswrapper[5023]: I0219 08:31:40.693268 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:40 crc kubenswrapper[5023]: I0219 08:31:40.754882 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:40 crc kubenswrapper[5023]: I0219 08:31:40.828780 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:42 crc kubenswrapper[5023]: I0219 08:31:42.476847 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:31:42 crc kubenswrapper[5023]: E0219 08:31:42.477076 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 08:31:45.694247 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 08:31:45.726372 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 
08:31:45.755231 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 08:31:45.761008 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 08:31:45.829163 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 08:31:45.833046 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 08:31:45.886415 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:45 crc kubenswrapper[5023]: I0219 08:31:45.937240 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:46 crc kubenswrapper[5023]: I0219 08:31:46.076211 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:46 crc kubenswrapper[5023]: I0219 08:31:46.081098 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:31:46 crc kubenswrapper[5023]: I0219 08:31:46.083772 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:31:46 crc kubenswrapper[5023]: I0219 08:31:46.103004 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:31:46 crc kubenswrapper[5023]: I0219 08:31:46.113592 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:31:48 crc kubenswrapper[5023]: I0219 08:31:48.148089 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:48 crc kubenswrapper[5023]: I0219 08:31:48.148936 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-central-agent" containerID="cri-o://65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" gracePeriod=30 Feb 19 08:31:48 crc kubenswrapper[5023]: I0219 08:31:48.149053 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-notification-agent" containerID="cri-o://3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" gracePeriod=30 Feb 19 08:31:48 crc kubenswrapper[5023]: I0219 08:31:48.149068 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="sg-core" containerID="cri-o://8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" gracePeriod=30 Feb 19 08:31:48 crc kubenswrapper[5023]: I0219 08:31:48.149079 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="proxy-httpd" containerID="cri-o://49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" gracePeriod=30 Feb 19 08:31:48 crc kubenswrapper[5023]: I0219 08:31:48.155959 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.246:3000/\": EOF" Feb 19 08:31:49 crc 
kubenswrapper[5023]: I0219 08:31:49.110832 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134313 5023 generic.go:334] "Generic (PLEG): container finished" podID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerID="49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" exitCode=0 Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134363 5023 generic.go:334] "Generic (PLEG): container finished" podID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerID="8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" exitCode=2 Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134374 5023 generic.go:334] "Generic (PLEG): container finished" podID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerID="3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" exitCode=0 Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134383 5023 generic.go:334] "Generic (PLEG): container finished" podID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerID="65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" exitCode=0 Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134408 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerDied","Data":"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81"} Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134454 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerDied","Data":"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768"} Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134465 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerDied","Data":"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956"} Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134473 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerDied","Data":"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6"} Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134482 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2e8accd-ce35-4253-b2b8-8b77577dce99","Type":"ContainerDied","Data":"bf3fcfced322a890bfe9d0172e7b8b7c39e42965d491db7994707c8312916f70"} Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134514 5023 scope.go:117] "RemoveContainer" containerID="49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.134775 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.164823 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-sg-core-conf-yaml\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.164905 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-log-httpd\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.164940 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-scripts\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.165012 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-run-httpd\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.165115 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sdzl\" (UniqueName: \"kubernetes.io/projected/e2e8accd-ce35-4253-b2b8-8b77577dce99-kube-api-access-7sdzl\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.165140 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-config-data\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.165199 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-combined-ca-bundle\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.165222 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-ceilometer-tls-certs\") pod \"e2e8accd-ce35-4253-b2b8-8b77577dce99\" (UID: \"e2e8accd-ce35-4253-b2b8-8b77577dce99\") " Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.165921 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.166214 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.170144 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-scripts" (OuterVolumeSpecName: "scripts") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.175453 5023 scope.go:117] "RemoveContainer" containerID="8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.179331 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e8accd-ce35-4253-b2b8-8b77577dce99-kube-api-access-7sdzl" (OuterVolumeSpecName: "kube-api-access-7sdzl") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "kube-api-access-7sdzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.192667 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.215870 5023 scope.go:117] "RemoveContainer" containerID="3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.217976 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.239306 5023 scope.go:117] "RemoveContainer" containerID="65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.249935 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-config-data" (OuterVolumeSpecName: "config-data") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.251894 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2e8accd-ce35-4253-b2b8-8b77577dce99" (UID: "e2e8accd-ce35-4253-b2b8-8b77577dce99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.263334 5023 scope.go:117] "RemoveContainer" containerID="49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 08:31:49.263717 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": container with ID starting with 49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81 not found: ID does not exist" containerID="49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.263755 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81"} err="failed to get container status \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": rpc error: code = NotFound desc = could not find container \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": container with ID starting with 49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.263778 5023 scope.go:117] "RemoveContainer" containerID="8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 08:31:49.264029 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": container with ID starting with 8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768 not found: ID does not exist" containerID="8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.264051 
5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768"} err="failed to get container status \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": rpc error: code = NotFound desc = could not find container \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": container with ID starting with 8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.264064 5023 scope.go:117] "RemoveContainer" containerID="3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 08:31:49.264338 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": container with ID starting with 3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956 not found: ID does not exist" containerID="3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.264360 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956"} err="failed to get container status \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": rpc error: code = NotFound desc = could not find container \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": container with ID starting with 3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.264372 5023 scope.go:117] "RemoveContainer" containerID="65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 
08:31:49.264671 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": container with ID starting with 65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6 not found: ID does not exist" containerID="65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.264693 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6"} err="failed to get container status \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": rpc error: code = NotFound desc = could not find container \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": container with ID starting with 65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.264706 5023 scope.go:117] "RemoveContainer" containerID="49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.264996 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81"} err="failed to get container status \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": rpc error: code = NotFound desc = could not find container \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": container with ID starting with 49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.265017 5023 scope.go:117] "RemoveContainer" containerID="8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" Feb 19 08:31:49 crc 
kubenswrapper[5023]: I0219 08:31:49.265512 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768"} err="failed to get container status \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": rpc error: code = NotFound desc = could not find container \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": container with ID starting with 8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.265530 5023 scope.go:117] "RemoveContainer" containerID="3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.265827 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956"} err="failed to get container status \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": rpc error: code = NotFound desc = could not find container \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": container with ID starting with 3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.265842 5023 scope.go:117] "RemoveContainer" containerID="65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.266229 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6"} err="failed to get container status \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": rpc error: code = NotFound desc = could not find container \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": container 
with ID starting with 65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.266248 5023 scope.go:117] "RemoveContainer" containerID="49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.266462 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81"} err="failed to get container status \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": rpc error: code = NotFound desc = could not find container \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": container with ID starting with 49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.266480 5023 scope.go:117] "RemoveContainer" containerID="8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.266819 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768"} err="failed to get container status \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": rpc error: code = NotFound desc = could not find container \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": container with ID starting with 8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.266841 5023 scope.go:117] "RemoveContainer" containerID="3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267142 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956"} err="failed to get container status \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": rpc error: code = NotFound desc = could not find container \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": container with ID starting with 3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267189 5023 scope.go:117] "RemoveContainer" containerID="65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267420 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267440 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sdzl\" (UniqueName: \"kubernetes.io/projected/e2e8accd-ce35-4253-b2b8-8b77577dce99-kube-api-access-7sdzl\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267451 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267460 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267468 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 
08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267478 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267503 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2e8accd-ce35-4253-b2b8-8b77577dce99-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267503 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6"} err="failed to get container status \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": rpc error: code = NotFound desc = could not find container \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": container with ID starting with 65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267549 5023 scope.go:117] "RemoveContainer" containerID="49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267512 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e8accd-ce35-4253-b2b8-8b77577dce99-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267917 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81"} err="failed to get container status \"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": rpc error: code = NotFound desc = could not find container 
\"49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81\": container with ID starting with 49e108e59cbccdf68318d7a6b345c9cf675016e89a80173a3c1e7f0488c06a81 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.267946 5023 scope.go:117] "RemoveContainer" containerID="8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.268221 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768"} err="failed to get container status \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": rpc error: code = NotFound desc = could not find container \"8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768\": container with ID starting with 8e772c07fc491489a8c54dab2163a4394e12926570f6addf066b0e76013be768 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.268242 5023 scope.go:117] "RemoveContainer" containerID="3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.268591 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956"} err="failed to get container status \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": rpc error: code = NotFound desc = could not find container \"3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956\": container with ID starting with 3b2178c81d144e59fe3b483987707a0015e5693498ac08271ecf644cd84e7956 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.268653 5023 scope.go:117] "RemoveContainer" containerID="65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.269185 5023 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6"} err="failed to get container status \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": rpc error: code = NotFound desc = could not find container \"65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6\": container with ID starting with 65677643fbcabb49abc5c011b1ed666c98eeb25cc146da7b8522a7d507dbefb6 not found: ID does not exist" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.464054 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.469811 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.486604 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" path="/var/lib/kubelet/pods/e2e8accd-ce35-4253-b2b8-8b77577dce99/volumes" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.487476 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 08:31:49.487756 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="proxy-httpd" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.487773 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="proxy-httpd" Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 08:31:49.487786 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-central-agent" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.487794 5023 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-central-agent" Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 08:31:49.487807 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-notification-agent" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.487813 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-notification-agent" Feb 19 08:31:49 crc kubenswrapper[5023]: E0219 08:31:49.487830 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="sg-core" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.487836 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="sg-core" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.488008 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-central-agent" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.488019 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="sg-core" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.488037 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="proxy-httpd" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.488047 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e8accd-ce35-4253-b2b8-8b77577dce99" containerName="ceilometer-notification-agent" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.490355 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.493926 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.494313 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.494867 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.496588 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.572071 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5xkn\" (UniqueName: \"kubernetes.io/projected/78ff582c-22eb-4737-985d-51a02b38dcca-kube-api-access-l5xkn\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.572136 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-scripts\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.572193 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.587000 5023 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.587102 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-config-data\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.587136 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.587314 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-log-httpd\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.587367 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-run-httpd\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.688870 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-scripts\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.688945 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.689023 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.689054 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-config-data\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.689074 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.689113 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-log-httpd\") pod \"ceilometer-0\" (UID: 
\"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.689134 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-run-httpd\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.689196 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xkn\" (UniqueName: \"kubernetes.io/projected/78ff582c-22eb-4737-985d-51a02b38dcca-kube-api-access-l5xkn\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.693405 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-log-httpd\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.693451 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-run-httpd\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.697001 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.697164 5023 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.697258 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.698142 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-config-data\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.704986 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5xkn\" (UniqueName: \"kubernetes.io/projected/78ff582c-22eb-4737-985d-51a02b38dcca-kube-api-access-l5xkn\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.706692 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-scripts\") pod \"ceilometer-0\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:49 crc kubenswrapper[5023]: I0219 08:31:49.807440 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:50 crc kubenswrapper[5023]: I0219 08:31:50.405390 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:31:50 crc kubenswrapper[5023]: W0219 08:31:50.418891 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78ff582c_22eb_4737_985d_51a02b38dcca.slice/crio-7fd7987e0810f9cb6f3072b93ff959acc8fa81b5d47cefd8a399f87fa4c652cc WatchSource:0}: Error finding container 7fd7987e0810f9cb6f3072b93ff959acc8fa81b5d47cefd8a399f87fa4c652cc: Status 404 returned error can't find the container with id 7fd7987e0810f9cb6f3072b93ff959acc8fa81b5d47cefd8a399f87fa4c652cc Feb 19 08:31:51 crc kubenswrapper[5023]: I0219 08:31:51.163185 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerStarted","Data":"9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f"} Feb 19 08:31:51 crc kubenswrapper[5023]: I0219 08:31:51.163493 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerStarted","Data":"7fd7987e0810f9cb6f3072b93ff959acc8fa81b5d47cefd8a399f87fa4c652cc"} Feb 19 08:31:52 crc kubenswrapper[5023]: I0219 08:31:52.173643 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerStarted","Data":"cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d"} Feb 19 08:31:53 crc kubenswrapper[5023]: I0219 08:31:53.183428 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerStarted","Data":"74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5"} Feb 19 08:31:54 crc kubenswrapper[5023]: I0219 08:31:54.195171 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerStarted","Data":"ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957"} Feb 19 08:31:54 crc kubenswrapper[5023]: I0219 08:31:54.195457 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:31:56 crc kubenswrapper[5023]: I0219 08:31:56.477103 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:31:56 crc kubenswrapper[5023]: E0219 08:31:56.478258 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:31:57 crc kubenswrapper[5023]: I0219 08:31:57.914871 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=5.621256655 podStartE2EDuration="8.914848924s" podCreationTimestamp="2026-02-19 08:31:49 +0000 UTC" firstStartedPulling="2026-02-19 08:31:50.421063866 +0000 UTC m=+1868.078182814" lastFinishedPulling="2026-02-19 08:31:53.714656135 +0000 UTC m=+1871.371775083" observedRunningTime="2026-02-19 08:31:54.219224194 +0000 UTC m=+1871.876343152" watchObservedRunningTime="2026-02-19 08:31:57.914848924 +0000 UTC m=+1875.571967872" Feb 19 08:31:57 crc kubenswrapper[5023]: I0219 08:31:57.915741 5023 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 19 08:31:57 crc kubenswrapper[5023]: I0219 08:31:57.917223 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:57 crc kubenswrapper[5023]: I0219 08:31:57.930476 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.060808 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.061161 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnvc\" (UniqueName: \"kubernetes.io/projected/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-kube-api-access-wqnvc\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.061264 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.061424 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: 
\"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.061521 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.061656 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-logs\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.163352 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.163422 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnvc\" (UniqueName: \"kubernetes.io/projected/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-kube-api-access-wqnvc\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.163453 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.163531 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.163555 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.163597 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-logs\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.164040 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-logs\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.170137 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.170140 5023 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.171325 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.172205 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.178400 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnvc\" (UniqueName: \"kubernetes.io/projected/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-kube-api-access-wqnvc\") pod \"watcher-kuttl-api-2\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.240779 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:58 crc kubenswrapper[5023]: I0219 08:31:58.681691 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 19 08:31:58 crc kubenswrapper[5023]: W0219 08:31:58.684393 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd61fa58_2b53_4746_a52c_b4fb2e3feaf4.slice/crio-5d5bea834323c77590ed63366f5830ac25bbb58e8e54a9292e9efcb3270d6a0e WatchSource:0}: Error finding container 5d5bea834323c77590ed63366f5830ac25bbb58e8e54a9292e9efcb3270d6a0e: Status 404 returned error can't find the container with id 5d5bea834323c77590ed63366f5830ac25bbb58e8e54a9292e9efcb3270d6a0e Feb 19 08:31:59 crc kubenswrapper[5023]: I0219 08:31:59.235516 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4","Type":"ContainerStarted","Data":"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a"} Feb 19 08:31:59 crc kubenswrapper[5023]: I0219 08:31:59.235575 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4","Type":"ContainerStarted","Data":"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c"} Feb 19 08:31:59 crc kubenswrapper[5023]: I0219 08:31:59.235589 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4","Type":"ContainerStarted","Data":"5d5bea834323c77590ed63366f5830ac25bbb58e8e54a9292e9efcb3270d6a0e"} Feb 19 08:31:59 crc kubenswrapper[5023]: I0219 08:31:59.235744 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:31:59 crc kubenswrapper[5023]: I0219 08:31:59.237066 5023 prober.go:107] 
"Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.253:9322/\": dial tcp 10.217.0.253:9322: connect: connection refused" Feb 19 08:32:02 crc kubenswrapper[5023]: I0219 08:32:02.407975 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:32:02 crc kubenswrapper[5023]: I0219 08:32:02.433919 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=5.433895927 podStartE2EDuration="5.433895927s" podCreationTimestamp="2026-02-19 08:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 08:31:59.268373087 +0000 UTC m=+1876.925492045" watchObservedRunningTime="2026-02-19 08:32:02.433895927 +0000 UTC m=+1880.091014875" Feb 19 08:32:03 crc kubenswrapper[5023]: I0219 08:32:03.242296 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:32:08 crc kubenswrapper[5023]: I0219 08:32:08.242391 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:32:08 crc kubenswrapper[5023]: I0219 08:32:08.248077 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:32:08 crc kubenswrapper[5023]: I0219 08:32:08.323639 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:32:08 crc kubenswrapper[5023]: I0219 08:32:08.478167 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:32:08 crc kubenswrapper[5023]: 
E0219 08:32:08.478470 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:32:09 crc kubenswrapper[5023]: I0219 08:32:09.416887 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 19 08:32:09 crc kubenswrapper[5023]: I0219 08:32:09.426724 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:32:09 crc kubenswrapper[5023]: I0219 08:32:09.426986 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-kuttl-api-log" containerID="cri-o://9f2d223fe9282839b13d70ab346f25af165478d419821018cda029a975154a9a" gracePeriod=30 Feb 19 08:32:09 crc kubenswrapper[5023]: I0219 08:32:09.427059 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-api" containerID="cri-o://e31e970ca83769b8d0a6013ddb9bb108af48b20c22a48aa095c37227e08bc9c8" gracePeriod=30 Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.329219 5023 generic.go:334] "Generic (PLEG): container finished" podID="c708a586-e602-4936-a980-8dc881d3e36c" containerID="e31e970ca83769b8d0a6013ddb9bb108af48b20c22a48aa095c37227e08bc9c8" exitCode=0 Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.329580 5023 generic.go:334] "Generic (PLEG): container finished" podID="c708a586-e602-4936-a980-8dc881d3e36c" 
containerID="9f2d223fe9282839b13d70ab346f25af165478d419821018cda029a975154a9a" exitCode=143 Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.329321 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c708a586-e602-4936-a980-8dc881d3e36c","Type":"ContainerDied","Data":"e31e970ca83769b8d0a6013ddb9bb108af48b20c22a48aa095c37227e08bc9c8"} Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.329652 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c708a586-e602-4936-a980-8dc881d3e36c","Type":"ContainerDied","Data":"9f2d223fe9282839b13d70ab346f25af165478d419821018cda029a975154a9a"} Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.329802 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-kuttl-api-log" containerID="cri-o://6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c" gracePeriod=30 Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.329895 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-api" containerID="cri-o://cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a" gracePeriod=30 Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.411993 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.512391 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-combined-ca-bundle\") pod \"c708a586-e602-4936-a980-8dc881d3e36c\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.512762 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-cert-memcached-mtls\") pod \"c708a586-e602-4936-a980-8dc881d3e36c\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.512801 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-custom-prometheus-ca\") pod \"c708a586-e602-4936-a980-8dc881d3e36c\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.512872 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c708a586-e602-4936-a980-8dc881d3e36c-logs\") pod \"c708a586-e602-4936-a980-8dc881d3e36c\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.512942 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-config-data\") pod \"c708a586-e602-4936-a980-8dc881d3e36c\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.512979 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sdj28\" (UniqueName: \"kubernetes.io/projected/c708a586-e602-4936-a980-8dc881d3e36c-kube-api-access-sdj28\") pod \"c708a586-e602-4936-a980-8dc881d3e36c\" (UID: \"c708a586-e602-4936-a980-8dc881d3e36c\") " Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.515033 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c708a586-e602-4936-a980-8dc881d3e36c-logs" (OuterVolumeSpecName: "logs") pod "c708a586-e602-4936-a980-8dc881d3e36c" (UID: "c708a586-e602-4936-a980-8dc881d3e36c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.518928 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c708a586-e602-4936-a980-8dc881d3e36c-kube-api-access-sdj28" (OuterVolumeSpecName: "kube-api-access-sdj28") pod "c708a586-e602-4936-a980-8dc881d3e36c" (UID: "c708a586-e602-4936-a980-8dc881d3e36c"). InnerVolumeSpecName "kube-api-access-sdj28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.541903 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c708a586-e602-4936-a980-8dc881d3e36c" (UID: "c708a586-e602-4936-a980-8dc881d3e36c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.556150 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c708a586-e602-4936-a980-8dc881d3e36c" (UID: "c708a586-e602-4936-a980-8dc881d3e36c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.581076 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-config-data" (OuterVolumeSpecName: "config-data") pod "c708a586-e602-4936-a980-8dc881d3e36c" (UID: "c708a586-e602-4936-a980-8dc881d3e36c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.599458 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c708a586-e602-4936-a980-8dc881d3e36c" (UID: "c708a586-e602-4936-a980-8dc881d3e36c"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.614915 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.614951 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.614962 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.614972 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c708a586-e602-4936-a980-8dc881d3e36c-logs\") on node \"crc\" DevicePath 
\"\"" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.614981 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c708a586-e602-4936-a980-8dc881d3e36c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:10 crc kubenswrapper[5023]: I0219 08:32:10.614991 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdj28\" (UniqueName: \"kubernetes.io/projected/c708a586-e602-4936-a980-8dc881d3e36c-kube-api-access-sdj28\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.203214 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.339901 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"c708a586-e602-4936-a980-8dc881d3e36c","Type":"ContainerDied","Data":"85d2c679eb1f671c32b442ef89315b181a3273a11b251fc34b7bd6a1ad054724"} Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.340220 5023 scope.go:117] "RemoveContainer" containerID="e31e970ca83769b8d0a6013ddb9bb108af48b20c22a48aa095c37227e08bc9c8" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.340383 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.344931 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerID="cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a" exitCode=0 Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.344971 5023 generic.go:334] "Generic (PLEG): container finished" podID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerID="6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c" exitCode=143 Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.344980 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.344993 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4","Type":"ContainerDied","Data":"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a"} Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.345023 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4","Type":"ContainerDied","Data":"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c"} Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.345036 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4","Type":"ContainerDied","Data":"5d5bea834323c77590ed63366f5830ac25bbb58e8e54a9292e9efcb3270d6a0e"} Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.352010 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-logs\") pod 
\"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.352185 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-cert-memcached-mtls\") pod \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.352237 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-combined-ca-bundle\") pod \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.352265 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqnvc\" (UniqueName: \"kubernetes.io/projected/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-kube-api-access-wqnvc\") pod \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.352292 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-custom-prometheus-ca\") pod \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.352350 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-config-data\") pod \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\" (UID: \"cd61fa58-2b53-4746-a52c-b4fb2e3feaf4\") " Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.352484 5023 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-logs" (OuterVolumeSpecName: "logs") pod "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" (UID: "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.353051 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.357106 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-kube-api-access-wqnvc" (OuterVolumeSpecName: "kube-api-access-wqnvc") pod "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" (UID: "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4"). InnerVolumeSpecName "kube-api-access-wqnvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.362863 5023 scope.go:117] "RemoveContainer" containerID="9f2d223fe9282839b13d70ab346f25af165478d419821018cda029a975154a9a" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.381326 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.381581 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" (UID: "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.387601 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" (UID: "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.387679 5023 scope.go:117] "RemoveContainer" containerID="cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.387992 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.406577 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-config-data" (OuterVolumeSpecName: "config-data") pod "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" (UID: "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.418257 5023 scope.go:117] "RemoveContainer" containerID="6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.426937 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" (UID: "cd61fa58-2b53-4746-a52c-b4fb2e3feaf4"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.438901 5023 scope.go:117] "RemoveContainer" containerID="cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a" Feb 19 08:32:11 crc kubenswrapper[5023]: E0219 08:32:11.439387 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a\": container with ID starting with cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a not found: ID does not exist" containerID="cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.439430 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a"} err="failed to get container status \"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a\": rpc error: code = NotFound desc = could not find container \"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a\": container with ID starting with cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a not found: ID does not exist" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.439460 5023 scope.go:117] "RemoveContainer" containerID="6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c" Feb 19 08:32:11 crc kubenswrapper[5023]: E0219 08:32:11.439873 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c\": container with ID starting with 6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c not found: ID does not exist" containerID="6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.439945 
5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c"} err="failed to get container status \"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c\": rpc error: code = NotFound desc = could not find container \"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c\": container with ID starting with 6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c not found: ID does not exist" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.439968 5023 scope.go:117] "RemoveContainer" containerID="cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.440550 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a"} err="failed to get container status \"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a\": rpc error: code = NotFound desc = could not find container \"cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a\": container with ID starting with cf95a87d3a84536b33c9eda1ec45542277949f9c8a5d5a539fb0fd18aad2665a not found: ID does not exist" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.440576 5023 scope.go:117] "RemoveContainer" containerID="6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.441256 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c"} err="failed to get container status \"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c\": rpc error: code = NotFound desc = could not find container \"6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c\": container with ID starting with 
6deae7724e0e5ecb793a606939276471fe886b085a91a470aa3312b59112298c not found: ID does not exist" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.455748 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.455783 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.455793 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.455802 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqnvc\" (UniqueName: \"kubernetes.io/projected/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-kube-api-access-wqnvc\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.455811 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.487677 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c708a586-e602-4936-a980-8dc881d3e36c" path="/var/lib/kubelet/pods/c708a586-e602-4936-a980-8dc881d3e36c/volumes" Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.665949 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 19 08:32:11 crc kubenswrapper[5023]: I0219 08:32:11.672555 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 19 08:32:12 crc kubenswrapper[5023]: I0219 08:32:12.681928 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:32:12 crc kubenswrapper[5023]: I0219 08:32:12.682448 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-kuttl-api-log" containerID="cri-o://514620816f92fb5fc033ffcf1b30a8ae00be3de2646a9f9dbbe8ea34257ec85a" gracePeriod=30 Feb 19 08:32:12 crc kubenswrapper[5023]: I0219 08:32:12.682620 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-api" containerID="cri-o://3a1086991515c08f525a45693b212952984ae5853bfb2367d92480e3878f2f4e" gracePeriod=30 Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.374396 5023 generic.go:334] "Generic (PLEG): container finished" podID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerID="3a1086991515c08f525a45693b212952984ae5853bfb2367d92480e3878f2f4e" exitCode=0 Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.374429 5023 generic.go:334] "Generic (PLEG): container finished" podID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerID="514620816f92fb5fc033ffcf1b30a8ae00be3de2646a9f9dbbe8ea34257ec85a" exitCode=143 Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.374450 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43","Type":"ContainerDied","Data":"3a1086991515c08f525a45693b212952984ae5853bfb2367d92480e3878f2f4e"} Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.374477 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43","Type":"ContainerDied","Data":"514620816f92fb5fc033ffcf1b30a8ae00be3de2646a9f9dbbe8ea34257ec85a"} Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.485848 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" path="/var/lib/kubelet/pods/cd61fa58-2b53-4746-a52c-b4fb2e3feaf4/volumes" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.576953 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.728538 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-logs\") pod \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.728596 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-combined-ca-bundle\") pod \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.728669 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-config-data\") pod \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.728710 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-custom-prometheus-ca\") pod \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " Feb 19 
08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.728743 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-cert-memcached-mtls\") pod \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.728836 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74w72\" (UniqueName: \"kubernetes.io/projected/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-kube-api-access-74w72\") pod \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\" (UID: \"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43\") " Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.729994 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-logs" (OuterVolumeSpecName: "logs") pod "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" (UID: "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.734820 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-kube-api-access-74w72" (OuterVolumeSpecName: "kube-api-access-74w72") pod "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" (UID: "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43"). InnerVolumeSpecName "kube-api-access-74w72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.758956 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" (UID: "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.762321 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" (UID: "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.801845 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-config-data" (OuterVolumeSpecName: "config-data") pod "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" (UID: "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.822071 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" (UID: "a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.830264 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.830302 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74w72\" (UniqueName: \"kubernetes.io/projected/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-kube-api-access-74w72\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.830317 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.830329 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.830343 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.830353 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.909039 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kv49c"] Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.917508 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kv49c"] Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.974431 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.974685 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="6850e909-6998-4241-b3da-1af27d5663b6" containerName="watcher-applier" containerID="cri-o://a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80" gracePeriod=30 Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.994591 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchercbe6-account-delete-hlv9h"] Feb 19 08:32:13 crc kubenswrapper[5023]: E0219 08:32:13.994975 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.994991 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: E0219 08:32:13.995001 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995007 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: E0219 08:32:13.995017 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995022 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc 
kubenswrapper[5023]: E0219 08:32:13.995031 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995037 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: E0219 08:32:13.995051 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995056 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: E0219 08:32:13.995072 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995077 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995214 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995228 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995241 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995250 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" 
containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995259 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c708a586-e602-4936-a980-8dc881d3e36c" containerName="watcher-kuttl-api-log" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995266 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd61fa58-2b53-4746-a52c-b4fb2e3feaf4" containerName="watcher-api" Feb 19 08:32:13 crc kubenswrapper[5023]: I0219 08:32:13.995851 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.001465 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.001727 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="3e096615-0d85-458f-8c45-29eddee745d7" containerName="watcher-decision-engine" containerID="cri-o://32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23" gracePeriod=30 Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.015639 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchercbe6-account-delete-hlv9h"] Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.134896 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blss7\" (UniqueName: \"kubernetes.io/projected/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-kube-api-access-blss7\") pod \"watchercbe6-account-delete-hlv9h\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.135057 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-operator-scripts\") pod \"watchercbe6-account-delete-hlv9h\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.235698 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blss7\" (UniqueName: \"kubernetes.io/projected/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-kube-api-access-blss7\") pod \"watchercbe6-account-delete-hlv9h\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.236076 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-operator-scripts\") pod \"watchercbe6-account-delete-hlv9h\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.236789 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-operator-scripts\") pod \"watchercbe6-account-delete-hlv9h\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.253548 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blss7\" (UniqueName: \"kubernetes.io/projected/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-kube-api-access-blss7\") pod \"watchercbe6-account-delete-hlv9h\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.320384 5023 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.406932 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43","Type":"ContainerDied","Data":"1efda38e480044951fcdaa029223d4b6b8536b1f626e989a2190c3f9bddf0e54"} Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.406983 5023 scope.go:117] "RemoveContainer" containerID="3a1086991515c08f525a45693b212952984ae5853bfb2367d92480e3878f2f4e" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.407111 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.457107 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.461642 5023 scope.go:117] "RemoveContainer" containerID="514620816f92fb5fc033ffcf1b30a8ae00be3de2646a9f9dbbe8ea34257ec85a" Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.467935 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 19 08:32:14 crc kubenswrapper[5023]: I0219 08:32:14.915235 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchercbe6-account-delete-hlv9h"] Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.505212 5023 generic.go:334] "Generic (PLEG): container finished" podID="6850e909-6998-4241-b3da-1af27d5663b6" containerID="a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80" exitCode=0 Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.517601 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897dc027-ddad-42fc-ad81-fa4a5b7c52ad" 
path="/var/lib/kubelet/pods/897dc027-ddad-42fc-ad81-fa4a5b7c52ad/volumes" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.518179 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43" path="/var/lib/kubelet/pods/a1d77055-c23a-4dcf-a23b-b7cdfd6b1f43/volumes" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.518747 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6850e909-6998-4241-b3da-1af27d5663b6","Type":"ContainerDied","Data":"a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80"} Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.521238 5023 generic.go:334] "Generic (PLEG): container finished" podID="8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e" containerID="eb8f1646d477f74d004d97e784640997a54053812da95c8d1b37c18e32d8b618" exitCode=0 Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.521290 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" event={"ID":"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e","Type":"ContainerDied","Data":"eb8f1646d477f74d004d97e784640997a54053812da95c8d1b37c18e32d8b618"} Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.521306 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" event={"ID":"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e","Type":"ContainerStarted","Data":"34fb61e1578c6863e91a23f6bbdfca990208e120997af37d621bd97a25d5136a"} Feb 19 08:32:15 crc kubenswrapper[5023]: E0219 08:32:15.697216 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80 is running failed: container process not found" containerID="a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:32:15 crc kubenswrapper[5023]: E0219 08:32:15.699363 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80 is running failed: container process not found" containerID="a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:32:15 crc kubenswrapper[5023]: E0219 08:32:15.700655 5023 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80 is running failed: container process not found" containerID="a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 08:32:15 crc kubenswrapper[5023]: E0219 08:32:15.700692 5023 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80 is running failed: container process not found" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="6850e909-6998-4241-b3da-1af27d5663b6" containerName="watcher-applier" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.702318 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.868663 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-combined-ca-bundle\") pod \"6850e909-6998-4241-b3da-1af27d5663b6\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.868709 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-config-data\") pod \"6850e909-6998-4241-b3da-1af27d5663b6\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.868788 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6850e909-6998-4241-b3da-1af27d5663b6-logs\") pod \"6850e909-6998-4241-b3da-1af27d5663b6\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.868935 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv9bq\" (UniqueName: \"kubernetes.io/projected/6850e909-6998-4241-b3da-1af27d5663b6-kube-api-access-lv9bq\") pod \"6850e909-6998-4241-b3da-1af27d5663b6\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.868972 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-cert-memcached-mtls\") pod \"6850e909-6998-4241-b3da-1af27d5663b6\" (UID: \"6850e909-6998-4241-b3da-1af27d5663b6\") " Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.869742 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6850e909-6998-4241-b3da-1af27d5663b6-logs" (OuterVolumeSpecName: "logs") pod "6850e909-6998-4241-b3da-1af27d5663b6" (UID: "6850e909-6998-4241-b3da-1af27d5663b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.875649 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6850e909-6998-4241-b3da-1af27d5663b6-kube-api-access-lv9bq" (OuterVolumeSpecName: "kube-api-access-lv9bq") pod "6850e909-6998-4241-b3da-1af27d5663b6" (UID: "6850e909-6998-4241-b3da-1af27d5663b6"). InnerVolumeSpecName "kube-api-access-lv9bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.902476 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6850e909-6998-4241-b3da-1af27d5663b6" (UID: "6850e909-6998-4241-b3da-1af27d5663b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.934603 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-config-data" (OuterVolumeSpecName: "config-data") pod "6850e909-6998-4241-b3da-1af27d5663b6" (UID: "6850e909-6998-4241-b3da-1af27d5663b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.951946 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6850e909-6998-4241-b3da-1af27d5663b6" (UID: "6850e909-6998-4241-b3da-1af27d5663b6"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.971437 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv9bq\" (UniqueName: \"kubernetes.io/projected/6850e909-6998-4241-b3da-1af27d5663b6-kube-api-access-lv9bq\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.971480 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.971492 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.971504 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6850e909-6998-4241-b3da-1af27d5663b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.971519 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6850e909-6998-4241-b3da-1af27d5663b6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:15 crc kubenswrapper[5023]: I0219 08:32:15.992747 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.072826 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-combined-ca-bundle\") pod \"3e096615-0d85-458f-8c45-29eddee745d7\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.072974 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnzp8\" (UniqueName: \"kubernetes.io/projected/3e096615-0d85-458f-8c45-29eddee745d7-kube-api-access-rnzp8\") pod \"3e096615-0d85-458f-8c45-29eddee745d7\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.073036 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-custom-prometheus-ca\") pod \"3e096615-0d85-458f-8c45-29eddee745d7\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.073061 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-cert-memcached-mtls\") pod \"3e096615-0d85-458f-8c45-29eddee745d7\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.073103 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e096615-0d85-458f-8c45-29eddee745d7-logs\") pod \"3e096615-0d85-458f-8c45-29eddee745d7\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.073129 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-config-data\") pod \"3e096615-0d85-458f-8c45-29eddee745d7\" (UID: \"3e096615-0d85-458f-8c45-29eddee745d7\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.073989 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e096615-0d85-458f-8c45-29eddee745d7-logs" (OuterVolumeSpecName: "logs") pod "3e096615-0d85-458f-8c45-29eddee745d7" (UID: "3e096615-0d85-458f-8c45-29eddee745d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.077559 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e096615-0d85-458f-8c45-29eddee745d7-kube-api-access-rnzp8" (OuterVolumeSpecName: "kube-api-access-rnzp8") pod "3e096615-0d85-458f-8c45-29eddee745d7" (UID: "3e096615-0d85-458f-8c45-29eddee745d7"). InnerVolumeSpecName "kube-api-access-rnzp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.097435 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3e096615-0d85-458f-8c45-29eddee745d7" (UID: "3e096615-0d85-458f-8c45-29eddee745d7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.109683 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e096615-0d85-458f-8c45-29eddee745d7" (UID: "3e096615-0d85-458f-8c45-29eddee745d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.120810 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-config-data" (OuterVolumeSpecName: "config-data") pod "3e096615-0d85-458f-8c45-29eddee745d7" (UID: "3e096615-0d85-458f-8c45-29eddee745d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.134722 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3e096615-0d85-458f-8c45-29eddee745d7" (UID: "3e096615-0d85-458f-8c45-29eddee745d7"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.174738 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.174774 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnzp8\" (UniqueName: \"kubernetes.io/projected/3e096615-0d85-458f-8c45-29eddee745d7-kube-api-access-rnzp8\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.174787 5023 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.174801 5023 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.174812 5023 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e096615-0d85-458f-8c45-29eddee745d7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.174823 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e096615-0d85-458f-8c45-29eddee745d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.538578 5023 generic.go:334] "Generic (PLEG): container finished" podID="3e096615-0d85-458f-8c45-29eddee745d7" containerID="32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23" exitCode=0 Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.538656 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3e096615-0d85-458f-8c45-29eddee745d7","Type":"ContainerDied","Data":"32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23"} Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.538686 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3e096615-0d85-458f-8c45-29eddee745d7","Type":"ContainerDied","Data":"f6e1956b8f98192f6ae923d2c99cb4299e55a02f3c55f0a953b2d4f0ae6ecbc9"} Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.538706 5023 scope.go:117] "RemoveContainer" containerID="32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.538818 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.543540 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6850e909-6998-4241-b3da-1af27d5663b6","Type":"ContainerDied","Data":"e75207413d2db2544d46c68b5a22f2a2ce1fe7138c2fe52cfaa14c811e1d448f"} Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.543638 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.564952 5023 scope.go:117] "RemoveContainer" containerID="32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23" Feb 19 08:32:16 crc kubenswrapper[5023]: E0219 08:32:16.565535 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23\": container with ID starting with 32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23 not found: ID does not exist" containerID="32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.565569 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23"} err="failed to get container status \"32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23\": rpc error: code = NotFound desc = could not find container \"32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23\": container with ID starting with 32cc47a1680d22a6982257050d9f9a4e43288adb5fdb398381deffc759192d23 not found: ID does not exist" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.565586 5023 scope.go:117] "RemoveContainer" 
containerID="a06917cc4eb3259b00f4408cb0fd8c9174bffe27e8c303edea6ce1d4a62e1f80" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.578966 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.596860 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.606006 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.613659 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.803424 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.808616 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.809076 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="proxy-httpd" containerID="cri-o://ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957" gracePeriod=30 Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.809245 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="sg-core" containerID="cri-o://74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5" gracePeriod=30 Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.809253 5023 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-central-agent" containerID="cri-o://9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f" gracePeriod=30 Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.809339 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-notification-agent" containerID="cri-o://cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d" gracePeriod=30 Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.844071 5023 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.885853 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blss7\" (UniqueName: \"kubernetes.io/projected/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-kube-api-access-blss7\") pod \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.886304 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-operator-scripts\") pod \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\" (UID: \"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e\") " Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.886916 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e" (UID: "8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.894931 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-kube-api-access-blss7" (OuterVolumeSpecName: "kube-api-access-blss7") pod "8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e" (UID: "8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e"). InnerVolumeSpecName "kube-api-access-blss7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.988561 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blss7\" (UniqueName: \"kubernetes.io/projected/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-kube-api-access-blss7\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:16 crc kubenswrapper[5023]: I0219 08:32:16.988608 5023 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.496892 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e096615-0d85-458f-8c45-29eddee745d7" path="/var/lib/kubelet/pods/3e096615-0d85-458f-8c45-29eddee745d7/volumes" Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.497471 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6850e909-6998-4241-b3da-1af27d5663b6" path="/var/lib/kubelet/pods/6850e909-6998-4241-b3da-1af27d5663b6/volumes" Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.558748 5023 generic.go:334] "Generic (PLEG): container finished" podID="78ff582c-22eb-4737-985d-51a02b38dcca" containerID="ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957" exitCode=0 Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.558791 5023 generic.go:334] "Generic (PLEG): container finished" 
podID="78ff582c-22eb-4737-985d-51a02b38dcca" containerID="74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5" exitCode=2 Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.558803 5023 generic.go:334] "Generic (PLEG): container finished" podID="78ff582c-22eb-4737-985d-51a02b38dcca" containerID="9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f" exitCode=0 Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.558807 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerDied","Data":"ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957"} Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.558851 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerDied","Data":"74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5"} Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.558867 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerDied","Data":"9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f"} Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.561199 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" event={"ID":"8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e","Type":"ContainerDied","Data":"34fb61e1578c6863e91a23f6bbdfca990208e120997af37d621bd97a25d5136a"} Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.561225 5023 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34fb61e1578c6863e91a23f6bbdfca990208e120997af37d621bd97a25d5136a" Feb 19 08:32:17 crc kubenswrapper[5023]: I0219 08:32:17.561343 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchercbe6-account-delete-hlv9h" Feb 19 08:32:17 crc kubenswrapper[5023]: E0219 08:32:17.587090 5023 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b5aafb8_f642_4ed9_b4f3_92b4c6c9c71e.slice\": RecentStats: unable to find data in memory cache]" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.355207 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526055 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-config-data\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526342 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-log-httpd\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526363 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-run-httpd\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526413 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5xkn\" (UniqueName: \"kubernetes.io/projected/78ff582c-22eb-4737-985d-51a02b38dcca-kube-api-access-l5xkn\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: 
\"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526441 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-scripts\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526458 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-ceilometer-tls-certs\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526529 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-sg-core-conf-yaml\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.526555 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-combined-ca-bundle\") pod \"78ff582c-22eb-4737-985d-51a02b38dcca\" (UID: \"78ff582c-22eb-4737-985d-51a02b38dcca\") " Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.527138 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.527333 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.546710 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ff582c-22eb-4737-985d-51a02b38dcca-kube-api-access-l5xkn" (OuterVolumeSpecName: "kube-api-access-l5xkn") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "kube-api-access-l5xkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.553531 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-scripts" (OuterVolumeSpecName: "scripts") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.556109 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.571738 5023 generic.go:334] "Generic (PLEG): container finished" podID="78ff582c-22eb-4737-985d-51a02b38dcca" containerID="cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d" exitCode=0 Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.571805 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerDied","Data":"cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d"} Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.571837 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"78ff582c-22eb-4737-985d-51a02b38dcca","Type":"ContainerDied","Data":"7fd7987e0810f9cb6f3072b93ff959acc8fa81b5d47cefd8a399f87fa4c652cc"} Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.571859 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.571855 5023 scope.go:117] "RemoveContainer" containerID="ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.576446 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.598719 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.629317 5023 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.629359 5023 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ff582c-22eb-4737-985d-51a02b38dcca-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.629372 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5xkn\" (UniqueName: \"kubernetes.io/projected/78ff582c-22eb-4737-985d-51a02b38dcca-kube-api-access-l5xkn\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.629384 5023 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.629397 5023 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.629406 5023 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.629414 5023 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.630369 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-config-data" (OuterVolumeSpecName: "config-data") pod "78ff582c-22eb-4737-985d-51a02b38dcca" (UID: "78ff582c-22eb-4737-985d-51a02b38dcca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.645877 5023 scope.go:117] "RemoveContainer" containerID="74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.662147 5023 scope.go:117] "RemoveContainer" containerID="cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.686797 5023 scope.go:117] "RemoveContainer" containerID="9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.707395 5023 scope.go:117] "RemoveContainer" containerID="ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.708601 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957\": container with ID starting with ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957 not found: ID does not exist" containerID="ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957" Feb 
19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.708699 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957"} err="failed to get container status \"ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957\": rpc error: code = NotFound desc = could not find container \"ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957\": container with ID starting with ed9b2ee89a355f2ed937359e8da517aec084194c40712b51a53ecad2b5cbe957 not found: ID does not exist" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.708721 5023 scope.go:117] "RemoveContainer" containerID="74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.709188 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5\": container with ID starting with 74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5 not found: ID does not exist" containerID="74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.709217 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5"} err="failed to get container status \"74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5\": rpc error: code = NotFound desc = could not find container \"74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5\": container with ID starting with 74410454a57a7502853b063fe1f3bd2092f750989e251c119869093e0d8617c5 not found: ID does not exist" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.709232 5023 scope.go:117] "RemoveContainer" 
containerID="cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.709558 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d\": container with ID starting with cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d not found: ID does not exist" containerID="cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.709731 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d"} err="failed to get container status \"cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d\": rpc error: code = NotFound desc = could not find container \"cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d\": container with ID starting with cf0c6c51baac4e96b8e754d11d589887befe46bddae8c654b0e60a5cf67e380d not found: ID does not exist" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.709857 5023 scope.go:117] "RemoveContainer" containerID="9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.710270 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f\": container with ID starting with 9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f not found: ID does not exist" containerID="9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.710290 5023 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f"} err="failed to get container status \"9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f\": rpc error: code = NotFound desc = could not find container \"9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f\": container with ID starting with 9eb06c948a47e6050c6611f63ee168bbbe66d415173d98f52749b3509030d27f not found: ID does not exist" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.731249 5023 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff582c-22eb-4737-985d-51a02b38dcca-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.912432 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.921623 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.940966 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.941289 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="sg-core" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941306 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="sg-core" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.941321 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-central-agent" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941328 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-central-agent" Feb 
19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.941337 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="proxy-httpd" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941344 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="proxy-httpd" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.941359 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e096615-0d85-458f-8c45-29eddee745d7" containerName="watcher-decision-engine" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941365 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e096615-0d85-458f-8c45-29eddee745d7" containerName="watcher-decision-engine" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.941378 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-notification-agent" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941386 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-notification-agent" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.941399 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6850e909-6998-4241-b3da-1af27d5663b6" containerName="watcher-applier" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941406 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="6850e909-6998-4241-b3da-1af27d5663b6" containerName="watcher-applier" Feb 19 08:32:18 crc kubenswrapper[5023]: E0219 08:32:18.941417 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e" containerName="mariadb-account-delete" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941423 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e" 
containerName="mariadb-account-delete" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941564 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="6850e909-6998-4241-b3da-1af27d5663b6" containerName="watcher-applier" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941573 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-central-agent" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941582 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e096615-0d85-458f-8c45-29eddee745d7" containerName="watcher-decision-engine" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941592 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="sg-core" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941601 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e" containerName="mariadb-account-delete" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941607 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="ceilometer-notification-agent" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.941620 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" containerName="proxy-httpd" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.943135 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.945475 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.945715 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.945997 5023 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 19 08:32:18 crc kubenswrapper[5023]: I0219 08:32:18.962919 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.026335 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7vkvp"] Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.033153 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7vkvp"] Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.046577 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp"] Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.048961 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a3f37b2-4a57-46ca-91fa-013a146747ef-run-httpd\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.049010 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.049132 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a3f37b2-4a57-46ca-91fa-013a146747ef-log-httpd\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.049242 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.049304 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-scripts\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.049379 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljwq\" (UniqueName: \"kubernetes.io/projected/8a3f37b2-4a57-46ca-91fa-013a146747ef-kube-api-access-jljwq\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.049439 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-config-data\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.049677 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.054699 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-cbe6-account-create-update-k9wjp"] Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.061601 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchercbe6-account-delete-hlv9h"] Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.074968 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchercbe6-account-delete-hlv9h"] Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.151640 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a3f37b2-4a57-46ca-91fa-013a146747ef-run-httpd\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.151975 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.152025 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a3f37b2-4a57-46ca-91fa-013a146747ef-run-httpd\") pod \"ceilometer-0\" (UID: 
\"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.152047 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a3f37b2-4a57-46ca-91fa-013a146747ef-log-httpd\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.152221 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.152327 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-scripts\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.152411 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jljwq\" (UniqueName: \"kubernetes.io/projected/8a3f37b2-4a57-46ca-91fa-013a146747ef-kube-api-access-jljwq\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.152477 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-config-data\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.152576 5023 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.153089 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a3f37b2-4a57-46ca-91fa-013a146747ef-log-httpd\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.156079 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.157071 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.157981 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.163183 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-scripts\") pod 
\"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.169568 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3f37b2-4a57-46ca-91fa-013a146747ef-config-data\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.182448 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljwq\" (UniqueName: \"kubernetes.io/projected/8a3f37b2-4a57-46ca-91fa-013a146747ef-kube-api-access-jljwq\") pod \"ceilometer-0\" (UID: \"8a3f37b2-4a57-46ca-91fa-013a146747ef\") " pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.289213 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.477321 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:32:19 crc kubenswrapper[5023]: E0219 08:32:19.478515 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.519778 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d29ed7-687b-47bf-bc4b-b2466e0cb913" path="/var/lib/kubelet/pods/16d29ed7-687b-47bf-bc4b-b2466e0cb913/volumes" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.520510 
5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d75349-c69a-4f53-938a-8d70833ee4d1" path="/var/lib/kubelet/pods/48d75349-c69a-4f53-938a-8d70833ee4d1/volumes" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.521035 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ff582c-22eb-4737-985d-51a02b38dcca" path="/var/lib/kubelet/pods/78ff582c-22eb-4737-985d-51a02b38dcca/volumes" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.522248 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e" path="/var/lib/kubelet/pods/8b5aafb8-f642-4ed9-b4f3-92b4c6c9c71e/volumes" Feb 19 08:32:19 crc kubenswrapper[5023]: I0219 08:32:19.631656 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 19 08:32:20 crc kubenswrapper[5023]: I0219 08:32:20.638371 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8a3f37b2-4a57-46ca-91fa-013a146747ef","Type":"ContainerStarted","Data":"1ab768edb3fef452b98f326043c2e1ea3628ba6c6070d9b9fe2d55e54bd0fc29"} Feb 19 08:32:20 crc kubenswrapper[5023]: I0219 08:32:20.638926 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8a3f37b2-4a57-46ca-91fa-013a146747ef","Type":"ContainerStarted","Data":"c7294a35c82daf2e7506fc87eaaa354a8168a80e3a08ff881ae606ae65421a76"} Feb 19 08:32:21 crc kubenswrapper[5023]: I0219 08:32:21.664232 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8a3f37b2-4a57-46ca-91fa-013a146747ef","Type":"ContainerStarted","Data":"968ce24a47feb296a64107971e755eec6891a9cd5eee6a2900e9b8311acbfee9"} Feb 19 08:32:21 crc kubenswrapper[5023]: I0219 08:32:21.664704 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"8a3f37b2-4a57-46ca-91fa-013a146747ef","Type":"ContainerStarted","Data":"fcc947be587172e45cc7331dbc11cf5b08b0e87cd5d1fd176105e5f1e1110759"} Feb 19 08:32:23 crc kubenswrapper[5023]: I0219 08:32:23.694483 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8a3f37b2-4a57-46ca-91fa-013a146747ef","Type":"ContainerStarted","Data":"a1a1dc481c95419c986bde9ebbb2d52a5ef95e889ce498a62ccbb1aad2eaf7d4"} Feb 19 08:32:23 crc kubenswrapper[5023]: I0219 08:32:23.695918 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:23 crc kubenswrapper[5023]: I0219 08:32:23.715587 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.427640963 podStartE2EDuration="5.715550791s" podCreationTimestamp="2026-02-19 08:32:18 +0000 UTC" firstStartedPulling="2026-02-19 08:32:19.649783941 +0000 UTC m=+1897.306902889" lastFinishedPulling="2026-02-19 08:32:22.937693759 +0000 UTC m=+1900.594812717" observedRunningTime="2026-02-19 08:32:23.712926551 +0000 UTC m=+1901.370045499" watchObservedRunningTime="2026-02-19 08:32:23.715550791 +0000 UTC m=+1901.372669739" Feb 19 08:32:31 crc kubenswrapper[5023]: I0219 08:32:31.477320 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:32:31 crc kubenswrapper[5023]: E0219 08:32:31.478094 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.192325 5023 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7gwnc/must-gather-mf8lc"] Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.194673 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.197966 5023 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7gwnc"/"default-dockercfg-p9dv6" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.198214 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7gwnc"/"openshift-service-ca.crt" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.198347 5023 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7gwnc"/"kube-root-ca.crt" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.201441 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7gwnc/must-gather-mf8lc"] Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.328394 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0483dcab-4d27-47b1-b98a-26e9535c123e-must-gather-output\") pod \"must-gather-mf8lc\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.328453 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfd6\" (UniqueName: \"kubernetes.io/projected/0483dcab-4d27-47b1-b98a-26e9535c123e-kube-api-access-4cfd6\") pod \"must-gather-mf8lc\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.430262 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0483dcab-4d27-47b1-b98a-26e9535c123e-must-gather-output\") pod \"must-gather-mf8lc\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.430325 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfd6\" (UniqueName: \"kubernetes.io/projected/0483dcab-4d27-47b1-b98a-26e9535c123e-kube-api-access-4cfd6\") pod \"must-gather-mf8lc\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.430861 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0483dcab-4d27-47b1-b98a-26e9535c123e-must-gather-output\") pod \"must-gather-mf8lc\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.452487 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfd6\" (UniqueName: \"kubernetes.io/projected/0483dcab-4d27-47b1-b98a-26e9535c123e-kube-api-access-4cfd6\") pod \"must-gather-mf8lc\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:42 crc kubenswrapper[5023]: I0219 08:32:42.512031 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:32:43 crc kubenswrapper[5023]: I0219 08:32:43.058015 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7gwnc/must-gather-mf8lc"] Feb 19 08:32:43 crc kubenswrapper[5023]: I0219 08:32:43.877219 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" event={"ID":"0483dcab-4d27-47b1-b98a-26e9535c123e","Type":"ContainerStarted","Data":"fa7a7282634ee675ee8aa719d367be22b24cd8a5c04b7f350b7d98ac30dfcaea"} Feb 19 08:32:44 crc kubenswrapper[5023]: I0219 08:32:44.496691 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:32:44 crc kubenswrapper[5023]: E0219 08:32:44.496975 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:32:49 crc kubenswrapper[5023]: I0219 08:32:49.297277 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 19 08:32:49 crc kubenswrapper[5023]: I0219 08:32:49.939342 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" event={"ID":"0483dcab-4d27-47b1-b98a-26e9535c123e","Type":"ContainerStarted","Data":"d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4"} Feb 19 08:32:49 crc kubenswrapper[5023]: I0219 08:32:49.939711 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" 
event={"ID":"0483dcab-4d27-47b1-b98a-26e9535c123e","Type":"ContainerStarted","Data":"1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c"} Feb 19 08:32:49 crc kubenswrapper[5023]: I0219 08:32:49.958066 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" podStartSLOduration=1.6018459269999998 podStartE2EDuration="7.958047387s" podCreationTimestamp="2026-02-19 08:32:42 +0000 UTC" firstStartedPulling="2026-02-19 08:32:43.064866055 +0000 UTC m=+1920.721985003" lastFinishedPulling="2026-02-19 08:32:49.421067505 +0000 UTC m=+1927.078186463" observedRunningTime="2026-02-19 08:32:49.955329315 +0000 UTC m=+1927.612448263" watchObservedRunningTime="2026-02-19 08:32:49.958047387 +0000 UTC m=+1927.615166335" Feb 19 08:32:59 crc kubenswrapper[5023]: I0219 08:32:59.477157 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:32:59 crc kubenswrapper[5023]: E0219 08:32:59.478048 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:33:14 crc kubenswrapper[5023]: I0219 08:33:14.476982 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:33:14 crc kubenswrapper[5023]: E0219 08:33:14.477917 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:33:26 crc kubenswrapper[5023]: I0219 08:33:26.476610 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:33:26 crc kubenswrapper[5023]: E0219 08:33:26.477412 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.215806 5023 scope.go:117] "RemoveContainer" containerID="8702df5d146cfaf8b2cec6c1fa151821e06f0fc0b9dbf78db329d0d09598d17b" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.235453 5023 scope.go:117] "RemoveContainer" containerID="fa473ab2a333fe0b734e89e18c488362343404813d0473f26c9811b1b2af5fc1" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.274549 5023 scope.go:117] "RemoveContainer" containerID="a6fcab4a38b5cd07da5e0ac3068c232d3d6d3117df2ce46a671b96f7209d4c9a" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.319604 5023 scope.go:117] "RemoveContainer" containerID="56259fe60dda93b6b16493089c1647dc830d702813b9fb73b3be3aa72b9fb691" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.375918 5023 scope.go:117] "RemoveContainer" containerID="c90fbe4c56c02b53dca6b34d5deb49a97eabb784d86ed7a8a643a5c47a578bfc" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.402049 5023 scope.go:117] "RemoveContainer" containerID="3c0fdae20f71309bbcc68e65c7e6230a9b0c3259bef73a5ec32e7f8dcb71096f" Feb 19 08:33:31 crc 
kubenswrapper[5023]: I0219 08:33:31.423452 5023 scope.go:117] "RemoveContainer" containerID="ebb046dc0b1d4244ced83182e8ee78a3ee4c594f8b94f6cfba1d6ba5e9822b78" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.451067 5023 scope.go:117] "RemoveContainer" containerID="1c3a8891db4cd46509c7f3c80a4048b9e263f3ebfead53f37d3e08bf0d4e04e4" Feb 19 08:33:31 crc kubenswrapper[5023]: I0219 08:33:31.469765 5023 scope.go:117] "RemoveContainer" containerID="460da3be2bde283181b52ea782bfe5f4793ee856e07736c0968cdc3a52d4313d" Feb 19 08:33:40 crc kubenswrapper[5023]: I0219 08:33:40.477140 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:33:40 crc kubenswrapper[5023]: E0219 08:33:40.477883 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:33:51 crc kubenswrapper[5023]: I0219 08:33:51.477690 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:33:52 crc kubenswrapper[5023]: I0219 08:33:52.442698 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"c5cce94256b07d6b6ecdf98c263895426fc5e174d39523cbcdc0c88f3b6e0a4b"} Feb 19 08:33:58 crc kubenswrapper[5023]: I0219 08:33:58.924992 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv_5d838e58-d185-465a-8999-7e2c9c572719/util/0.log" Feb 19 08:33:59 crc 
kubenswrapper[5023]: I0219 08:33:59.076397 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv_5d838e58-d185-465a-8999-7e2c9c572719/util/0.log" Feb 19 08:33:59 crc kubenswrapper[5023]: I0219 08:33:59.134568 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv_5d838e58-d185-465a-8999-7e2c9c572719/pull/0.log" Feb 19 08:33:59 crc kubenswrapper[5023]: I0219 08:33:59.136248 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv_5d838e58-d185-465a-8999-7e2c9c572719/pull/0.log" Feb 19 08:33:59 crc kubenswrapper[5023]: I0219 08:33:59.393468 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv_5d838e58-d185-465a-8999-7e2c9c572719/pull/0.log" Feb 19 08:33:59 crc kubenswrapper[5023]: I0219 08:33:59.542324 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv_5d838e58-d185-465a-8999-7e2c9c572719/util/0.log" Feb 19 08:33:59 crc kubenswrapper[5023]: I0219 08:33:59.640334 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6eb0b73a879683c0aacb41a8d173d48c41a6846656ea82cb40e2c68f29kx2qv_5d838e58-d185-465a-8999-7e2c9c572719/extract/0.log" Feb 19 08:33:59 crc kubenswrapper[5023]: I0219 08:33:59.800492 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m_b44022ae-c88d-4656-a82a-bb5cbd80226a/util/0.log" Feb 19 08:33:59 crc kubenswrapper[5023]: I0219 08:33:59.996653 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m_b44022ae-c88d-4656-a82a-bb5cbd80226a/pull/0.log" Feb 19 08:34:00 crc kubenswrapper[5023]: I0219 08:34:00.035731 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m_b44022ae-c88d-4656-a82a-bb5cbd80226a/util/0.log" Feb 19 08:34:00 crc kubenswrapper[5023]: I0219 08:34:00.044113 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m_b44022ae-c88d-4656-a82a-bb5cbd80226a/pull/0.log" Feb 19 08:34:00 crc kubenswrapper[5023]: I0219 08:34:00.204299 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m_b44022ae-c88d-4656-a82a-bb5cbd80226a/util/0.log" Feb 19 08:34:00 crc kubenswrapper[5023]: I0219 08:34:00.213667 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m_b44022ae-c88d-4656-a82a-bb5cbd80226a/pull/0.log" Feb 19 08:34:00 crc kubenswrapper[5023]: I0219 08:34:00.235282 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8bf62e6306d93197ac345f51045a3ba87933fc94cc13d2289521f914b49p42m_b44022ae-c88d-4656-a82a-bb5cbd80226a/extract/0.log" Feb 19 08:34:00 crc kubenswrapper[5023]: I0219 08:34:00.750857 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-ppgdp_cdfff2ca-6dc1-4850-806d-7fb9195e276a/manager/0.log" Feb 19 08:34:00 crc kubenswrapper[5023]: I0219 08:34:00.991800 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-hsz4t_05d6abf5-ddc2-460e-8b10-252292257fdd/manager/0.log" Feb 19 08:34:01 crc 
kubenswrapper[5023]: I0219 08:34:01.182396 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-s74tq_a396f869-bade-4ff1-9031-ac899d4d6ed2/manager/0.log" Feb 19 08:34:01 crc kubenswrapper[5023]: I0219 08:34:01.423685 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-lfj5q_f96cd850-d719-444c-8015-fdffb335df27/manager/0.log" Feb 19 08:34:01 crc kubenswrapper[5023]: I0219 08:34:01.898255 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-txbbh_61b3e902-e458-49b8-8924-fd607e116c1f/manager/0.log" Feb 19 08:34:01 crc kubenswrapper[5023]: I0219 08:34:01.916728 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-jvqln_9719932b-2c04-47a0-97b8-492d4a5d297c/manager/0.log" Feb 19 08:34:02 crc kubenswrapper[5023]: I0219 08:34:02.122742 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-wgs6h_b73d7256-9139-4cbd-b7a7-7b4b3852aafb/manager/0.log" Feb 19 08:34:02 crc kubenswrapper[5023]: I0219 08:34:02.381661 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-58ml6_e61f8f71-02fe-448d-a0ef-1d2290d558b1/manager/0.log" Feb 19 08:34:02 crc kubenswrapper[5023]: I0219 08:34:02.470902 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-9zksh_aa77cbbd-b043-472e-ba08-07c42e16d326/manager/0.log" Feb 19 08:34:02 crc kubenswrapper[5023]: I0219 08:34:02.902084 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-m2bd5_8d91d728-e5b6-4f5e-81ad-158b96069d64/manager/0.log" Feb 19 
08:34:03 crc kubenswrapper[5023]: I0219 08:34:03.118890 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-9rxg5_17f2a3cb-6233-4f7f-b530-fb662f1aba34/manager/0.log" Feb 19 08:34:03 crc kubenswrapper[5023]: I0219 08:34:03.211697 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-zwc8v_e9e36838-6d27-4e7e-9619-e3cd7b304426/manager/0.log" Feb 19 08:34:03 crc kubenswrapper[5023]: I0219 08:34:03.594454 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c6qxl9_7fc6e4db-1bd8-42ff-a64e-c4f356f80806/manager/0.log" Feb 19 08:34:04 crc kubenswrapper[5023]: I0219 08:34:04.477210 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7v4rv_ca2cee23-359d-4810-8ded-0ce03a1c4add/registry-server/0.log" Feb 19 08:34:04 crc kubenswrapper[5023]: I0219 08:34:04.632802 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-dfkgq_314f00ab-6012-4663-b265-2df54d81511b/manager/0.log" Feb 19 08:34:04 crc kubenswrapper[5023]: I0219 08:34:04.643045 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-c8dc87cd9-xrk5c_0c7247ae-fc2e-42b0-8333-33093c37978e/manager/0.log" Feb 19 08:34:04 crc kubenswrapper[5023]: I0219 08:34:04.917065 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nsz2f_6e8405b6-2fae-404e-87c3-635d94cc4376/operator/0.log" Feb 19 08:34:04 crc kubenswrapper[5023]: I0219 08:34:04.932354 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-jdlhp_d2d4b854-9e89-4f1c-b8ce-c3ec8a25ff0d/manager/0.log" 
Feb 19 08:34:05 crc kubenswrapper[5023]: I0219 08:34:05.143561 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-9wcz4_2d806bd1-886e-4643-a98e-856c74c803aa/manager/0.log"
Feb 19 08:34:05 crc kubenswrapper[5023]: I0219 08:34:05.176020 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-kjbpp_486c209b-21d4-45cb-9b95-cb8d27df2ad1/manager/0.log"
Feb 19 08:34:05 crc kubenswrapper[5023]: I0219 08:34:05.371640 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-shhzj_7b5a2508-a1ef-40f4-92c3-91aae50788ba/manager/0.log"
Feb 19 08:34:05 crc kubenswrapper[5023]: I0219 08:34:05.752778 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-ks9rd_b448df69-64f6-4ba5-9c1d-60d1ca582acb/manager/0.log"
Feb 19 08:34:05 crc kubenswrapper[5023]: I0219 08:34:05.825955 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-n6z9c_47450b8f-2238-4432-9048-92cd1bb2a290/registry-server/0.log"
Feb 19 08:34:06 crc kubenswrapper[5023]: I0219 08:34:06.378081 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cc98bc54-8h2jk_f13b16cf-c804-4498-be33-744ccaa1c8eb/manager/0.log"
Feb 19 08:34:07 crc kubenswrapper[5023]: I0219 08:34:07.789310 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-5xq6x_677afd79-73b0-45db-a513-6b77dfb09992/manager/0.log"
Feb 19 08:34:28 crc kubenswrapper[5023]: I0219 08:34:28.351854 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kpsw6_686daed9-9edb-4929-b686-ed1611d57ca3/control-plane-machine-set-operator/0.log"
Feb 19 08:34:28 crc kubenswrapper[5023]: I0219 08:34:28.580245 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xsnwk_78a61028-ddc3-4560-8fe7-83deff82f5d7/kube-rbac-proxy/0.log"
Feb 19 08:34:28 crc kubenswrapper[5023]: I0219 08:34:28.633199 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xsnwk_78a61028-ddc3-4560-8fe7-83deff82f5d7/machine-api-operator/0.log"
Feb 19 08:34:31 crc kubenswrapper[5023]: I0219 08:34:31.638008 5023 scope.go:117] "RemoveContainer" containerID="7e14e26865ffb668305c05b7d2a3bc099c0054ac9cdb0a6099102dd5c34fc4b5"
Feb 19 08:34:31 crc kubenswrapper[5023]: I0219 08:34:31.675010 5023 scope.go:117] "RemoveContainer" containerID="dd09a79668cf2b0b8cb68a21f0b224fed21ca57c397253ac34e9b7d720da5551"
Feb 19 08:34:31 crc kubenswrapper[5023]: I0219 08:34:31.700763 5023 scope.go:117] "RemoveContainer" containerID="10b2a6fbc849751307ba9c4b3b9f5da0e16e6be5b6d16375a4bf887b5370fc98"
Feb 19 08:34:31 crc kubenswrapper[5023]: I0219 08:34:31.741795 5023 scope.go:117] "RemoveContainer" containerID="5eb23a973606a101b80aa82eacd73943737a6e06469d50aae3d51d3770e76166"
Feb 19 08:34:31 crc kubenswrapper[5023]: I0219 08:34:31.758334 5023 scope.go:117] "RemoveContainer" containerID="b4b3dde1c71fc77cfa0ce798bf09feec907fa44aee39260fe24eebfc250874e9"
Feb 19 08:34:31 crc kubenswrapper[5023]: I0219 08:34:31.791839 5023 scope.go:117] "RemoveContainer" containerID="337ffc8dd95b077005dc7dac668356effa8273025493f72a8072751bcbd5e3dd"
Feb 19 08:34:31 crc kubenswrapper[5023]: I0219 08:34:31.826455 5023 scope.go:117] "RemoveContainer" containerID="32b3c39dd947cb49c6282fce695f830fa80ea694759209dee4a48d02d235e5f3"
Feb 19 08:34:41 crc kubenswrapper[5023]: I0219 08:34:41.925599 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-7sjkx_872749de-64b7-4a74-a8d9-70bb7d41b496/cert-manager-controller/0.log"
Feb 19 08:34:42 crc kubenswrapper[5023]: I0219 08:34:42.062297 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-4qkq4_9d8e36c4-29f0-4acb-b3c2-8fa44738751a/cert-manager-cainjector/0.log"
Feb 19 08:34:42 crc kubenswrapper[5023]: I0219 08:34:42.155986 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-8qvzr_b0363881-ec76-4013-8589-43bd4b142716/cert-manager-webhook/0.log"
Feb 19 08:34:54 crc kubenswrapper[5023]: I0219 08:34:54.899568 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-8jh2n_cc7ad06c-3614-4f0d-88ad-1d743499fc9c/nmstate-console-plugin/0.log"
Feb 19 08:34:55 crc kubenswrapper[5023]: I0219 08:34:55.031012 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-f9rlh_d9ec14c0-957a-473e-9c95-aa0ced5b523c/nmstate-handler/0.log"
Feb 19 08:34:55 crc kubenswrapper[5023]: I0219 08:34:55.079889 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4m9fh_abb296fe-0769-478a-ac52-38a1610a8ca8/kube-rbac-proxy/0.log"
Feb 19 08:34:55 crc kubenswrapper[5023]: I0219 08:34:55.101746 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4m9fh_abb296fe-0769-478a-ac52-38a1610a8ca8/nmstate-metrics/0.log"
Feb 19 08:34:55 crc kubenswrapper[5023]: I0219 08:34:55.280576 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-cgwg2_6180e8c4-c97c-411e-b3a1-2bac8b0afed2/nmstate-operator/0.log"
Feb 19 08:34:55 crc kubenswrapper[5023]: I0219 08:34:55.316122 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-9hdkv_78d642b7-0914-4e8b-840b-7fc5454ddab6/nmstate-webhook/0.log"
Feb 19 08:35:10 crc kubenswrapper[5023]: I0219 08:35:10.731481 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qgn84_9ac16bf5-97d2-478b-a915-9f9919ecd59e/prometheus-operator/0.log"
Feb 19 08:35:10 crc kubenswrapper[5023]: I0219 08:35:10.910304 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1/prometheus-operator-admission-webhook/0.log"
Feb 19 08:35:10 crc kubenswrapper[5023]: I0219 08:35:10.976042 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_4b26147b-3c73-4b0d-8810-38d893b67b6b/prometheus-operator-admission-webhook/0.log"
Feb 19 08:35:11 crc kubenswrapper[5023]: I0219 08:35:11.154439 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jghsx_abccc29c-4404-4fbf-abec-9046e05e6bc3/operator/0.log"
Feb 19 08:35:11 crc kubenswrapper[5023]: I0219 08:35:11.207410 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ztvtc_817dfdb3-899e-49c9-9a8b-73f8c3e80c52/observability-ui-dashboards/0.log"
Feb 19 08:35:11 crc kubenswrapper[5023]: I0219 08:35:11.333015 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vg2dl_49bbb335-22f1-432d-8508-9575cf6006ac/perses-operator/0.log"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.035887 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-zmtgl"]
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.041293 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-zmtgl"]
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.158420 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cpcnk"]
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.162383 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.173061 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpcnk"]
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.303692 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-utilities\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.303794 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-catalog-content\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.303841 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8psk4\" (UniqueName: \"kubernetes.io/projected/c10e0801-688e-4095-8cb1-ae17cbc26115-kube-api-access-8psk4\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.405453 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-utilities\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.405548 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-catalog-content\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.405589 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8psk4\" (UniqueName: \"kubernetes.io/projected/c10e0801-688e-4095-8cb1-ae17cbc26115-kube-api-access-8psk4\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.405927 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-utilities\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.406051 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-catalog-content\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.433978 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8psk4\" (UniqueName: \"kubernetes.io/projected/c10e0801-688e-4095-8cb1-ae17cbc26115-kube-api-access-8psk4\") pod \"redhat-marketplace-cpcnk\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") " pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.483955 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.486611 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bef69f-54ef-460f-aa22-3ac64259b621" path="/var/lib/kubelet/pods/02bef69f-54ef-460f-aa22-3ac64259b621/volumes"
Feb 19 08:35:21 crc kubenswrapper[5023]: I0219 08:35:21.988760 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpcnk"]
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.166021 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpcnk" event={"ID":"c10e0801-688e-4095-8cb1-ae17cbc26115","Type":"ContainerStarted","Data":"88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5"}
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.166062 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpcnk" event={"ID":"c10e0801-688e-4095-8cb1-ae17cbc26115","Type":"ContainerStarted","Data":"240aca506c9ab1628d6de4917ea5223f6df6272545114692fdb298968eb656ea"}
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.552325 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d9fv8"]
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.554197 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.577192 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9fv8"]
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.628377 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-utilities\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.628608 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-catalog-content\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.628730 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2zh\" (UniqueName: \"kubernetes.io/projected/26e3501c-7d69-4357-8507-a819ecf777e3-kube-api-access-qz2zh\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.729332 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-catalog-content\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.729420 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2zh\" (UniqueName: \"kubernetes.io/projected/26e3501c-7d69-4357-8507-a819ecf777e3-kube-api-access-qz2zh\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.729517 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-utilities\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.729861 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-catalog-content\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.729939 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-utilities\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.755190 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2zh\" (UniqueName: \"kubernetes.io/projected/26e3501c-7d69-4357-8507-a819ecf777e3-kube-api-access-qz2zh\") pod \"redhat-operators-d9fv8\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") " pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:22 crc kubenswrapper[5023]: I0219 08:35:22.871835 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:23 crc kubenswrapper[5023]: I0219 08:35:23.176998 5023 generic.go:334] "Generic (PLEG): container finished" podID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerID="88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5" exitCode=0
Feb 19 08:35:23 crc kubenswrapper[5023]: I0219 08:35:23.177231 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpcnk" event={"ID":"c10e0801-688e-4095-8cb1-ae17cbc26115","Type":"ContainerDied","Data":"88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5"}
Feb 19 08:35:23 crc kubenswrapper[5023]: I0219 08:35:23.356097 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9fv8"]
Feb 19 08:35:23 crc kubenswrapper[5023]: W0219 08:35:23.358872 5023 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26e3501c_7d69_4357_8507_a819ecf777e3.slice/crio-b5a49fcc4b795ec32352f37ed7490f0d5f41a2108cc6f343c2707b918cf9fd9d WatchSource:0}: Error finding container b5a49fcc4b795ec32352f37ed7490f0d5f41a2108cc6f343c2707b918cf9fd9d: Status 404 returned error can't find the container with id b5a49fcc4b795ec32352f37ed7490f0d5f41a2108cc6f343c2707b918cf9fd9d
Feb 19 08:35:24 crc kubenswrapper[5023]: I0219 08:35:24.186368 5023 generic.go:334] "Generic (PLEG): container finished" podID="26e3501c-7d69-4357-8507-a819ecf777e3" containerID="bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d" exitCode=0
Feb 19 08:35:24 crc kubenswrapper[5023]: I0219 08:35:24.186418 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9fv8" event={"ID":"26e3501c-7d69-4357-8507-a819ecf777e3","Type":"ContainerDied","Data":"bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d"}
Feb 19 08:35:24 crc kubenswrapper[5023]: I0219 08:35:24.186718 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9fv8" event={"ID":"26e3501c-7d69-4357-8507-a819ecf777e3","Type":"ContainerStarted","Data":"b5a49fcc4b795ec32352f37ed7490f0d5f41a2108cc6f343c2707b918cf9fd9d"}
Feb 19 08:35:24 crc kubenswrapper[5023]: I0219 08:35:24.188757 5023 generic.go:334] "Generic (PLEG): container finished" podID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerID="1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244" exitCode=0
Feb 19 08:35:24 crc kubenswrapper[5023]: I0219 08:35:24.188791 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpcnk" event={"ID":"c10e0801-688e-4095-8cb1-ae17cbc26115","Type":"ContainerDied","Data":"1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244"}
Feb 19 08:35:25 crc kubenswrapper[5023]: I0219 08:35:25.199750 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpcnk" event={"ID":"c10e0801-688e-4095-8cb1-ae17cbc26115","Type":"ContainerStarted","Data":"093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c"}
Feb 19 08:35:25 crc kubenswrapper[5023]: I0219 08:35:25.201797 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9fv8" event={"ID":"26e3501c-7d69-4357-8507-a819ecf777e3","Type":"ContainerStarted","Data":"080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42"}
Feb 19 08:35:25 crc kubenswrapper[5023]: I0219 08:35:25.219245 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cpcnk" podStartSLOduration=2.774933285 podStartE2EDuration="4.219228352s" podCreationTimestamp="2026-02-19 08:35:21 +0000 UTC" firstStartedPulling="2026-02-19 08:35:23.178864636 +0000 UTC m=+2080.835983584" lastFinishedPulling="2026-02-19 08:35:24.623159703 +0000 UTC m=+2082.280278651" observedRunningTime="2026-02-19 08:35:25.218224275 +0000 UTC m=+2082.875343223" watchObservedRunningTime="2026-02-19 08:35:25.219228352 +0000 UTC m=+2082.876347300"
Feb 19 08:35:26 crc kubenswrapper[5023]: I0219 08:35:26.211905 5023 generic.go:334] "Generic (PLEG): container finished" podID="26e3501c-7d69-4357-8507-a819ecf777e3" containerID="080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42" exitCode=0
Feb 19 08:35:26 crc kubenswrapper[5023]: I0219 08:35:26.213595 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9fv8" event={"ID":"26e3501c-7d69-4357-8507-a819ecf777e3","Type":"ContainerDied","Data":"080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42"}
Feb 19 08:35:26 crc kubenswrapper[5023]: I0219 08:35:26.788550 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-l6q57_52cb1a3f-622d-4b75-a16b-05a1b932eeeb/kube-rbac-proxy/0.log"
Feb 19 08:35:26 crc kubenswrapper[5023]: I0219 08:35:26.985542 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-l6q57_52cb1a3f-622d-4b75-a16b-05a1b932eeeb/controller/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.120905 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-frr-files/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.221127 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9fv8" event={"ID":"26e3501c-7d69-4357-8507-a819ecf777e3","Type":"ContainerStarted","Data":"b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8"}
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.241039 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d9fv8" podStartSLOduration=2.721238432 podStartE2EDuration="5.241022062s" podCreationTimestamp="2026-02-19 08:35:22 +0000 UTC" firstStartedPulling="2026-02-19 08:35:24.187727628 +0000 UTC m=+2081.844846576" lastFinishedPulling="2026-02-19 08:35:26.707511258 +0000 UTC m=+2084.364630206" observedRunningTime="2026-02-19 08:35:27.239518232 +0000 UTC m=+2084.896637180" watchObservedRunningTime="2026-02-19 08:35:27.241022062 +0000 UTC m=+2084.898141010"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.316962 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-frr-files/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.370961 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-metrics/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.389752 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-reloader/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.429288 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-reloader/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.649125 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-metrics/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.652914 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-frr-files/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.692519 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-reloader/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.787916 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-metrics/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.934947 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-frr-files/0.log"
Feb 19 08:35:27 crc kubenswrapper[5023]: I0219 08:35:27.968171 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-metrics/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.024494 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/cp-reloader/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.032399 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/controller/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.244280 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/kube-rbac-proxy/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.244459 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/frr-metrics/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.311580 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/kube-rbac-proxy-frr/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.439913 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/reloader/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.650974 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-fm5ph_33eb4f2b-7821-4e6b-a69e-2cda1a6489e8/frr-k8s-webhook-server/0.log"
Feb 19 08:35:28 crc kubenswrapper[5023]: I0219 08:35:28.818688 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-744474f4f9-cg2wm_6afd6128-1c17-4490-8b98-52b684318f65/manager/0.log"
Feb 19 08:35:29 crc kubenswrapper[5023]: I0219 08:35:29.116492 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-674976f6cc-f4mpk_acdff3eb-f5d2-48f5-bef3-08606374dc4d/webhook-server/0.log"
Feb 19 08:35:29 crc kubenswrapper[5023]: I0219 08:35:29.268715 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sffgs_51b4e594-f586-4108-ad83-8beb7cba09ca/frr/0.log"
Feb 19 08:35:29 crc kubenswrapper[5023]: I0219 08:35:29.346399 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tsc67_bfc832d4-eeff-4559-b058-2599bb2c9baa/kube-rbac-proxy/0.log"
Feb 19 08:35:29 crc kubenswrapper[5023]: I0219 08:35:29.872432 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tsc67_bfc832d4-eeff-4559-b058-2599bb2c9baa/speaker/0.log"
Feb 19 08:35:31 crc kubenswrapper[5023]: I0219 08:35:31.494967 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:31 crc kubenswrapper[5023]: I0219 08:35:31.495017 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:31 crc kubenswrapper[5023]: I0219 08:35:31.534894 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:31 crc kubenswrapper[5023]: I0219 08:35:31.948706 5023 scope.go:117] "RemoveContainer" containerID="c5fff8d37df38abf87c87fc576230474be30d7b3e9a19fa7872e2e8e14d5f403"
Feb 19 08:35:31 crc kubenswrapper[5023]: I0219 08:35:31.973793 5023 scope.go:117] "RemoveContainer" containerID="b6f5fd09fead194263061f8590deedfaf550754478f2f961fb628d72f4c78862"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.009649 5023 scope.go:117] "RemoveContainer" containerID="6e2851ee52a37ae4aba5850a75c342ff8d2df2f5e120b0689786d93d20788285"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.052505 5023 scope.go:117] "RemoveContainer" containerID="f6bd80e637fed7d37712e1e1418c53837c3193dcf87b694fdfa3ef2f1292cb5b"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.122017 5023 scope.go:117] "RemoveContainer" containerID="221de5c78ff65ec411f137c5a011e175822e722191e975380d26b15351192f50"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.156837 5023 scope.go:117] "RemoveContainer" containerID="15ec93af9199004e1a29fd88407a18b01d1a7da85d00771997ffe3a26966ae6b"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.333004 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.871980 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.872114 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:32 crc kubenswrapper[5023]: I0219 08:35:32.917347 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:33 crc kubenswrapper[5023]: I0219 08:35:33.351685 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:35 crc kubenswrapper[5023]: I0219 08:35:35.163723 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9fv8"]
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.310460 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d9fv8" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="registry-server" containerID="cri-o://b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8" gracePeriod=2
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.342105 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpcnk"]
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.342397 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cpcnk" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="registry-server" containerID="cri-o://093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c" gracePeriod=2
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.795544 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9fv8"
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.801830 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpcnk"
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.945591 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-utilities\") pod \"26e3501c-7d69-4357-8507-a819ecf777e3\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") "
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.946016 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-catalog-content\") pod \"26e3501c-7d69-4357-8507-a819ecf777e3\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") "
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.946058 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-catalog-content\") pod \"c10e0801-688e-4095-8cb1-ae17cbc26115\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") "
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.946134 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8psk4\" (UniqueName: \"kubernetes.io/projected/c10e0801-688e-4095-8cb1-ae17cbc26115-kube-api-access-8psk4\") pod \"c10e0801-688e-4095-8cb1-ae17cbc26115\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") "
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.946229 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-utilities\") pod \"c10e0801-688e-4095-8cb1-ae17cbc26115\" (UID: \"c10e0801-688e-4095-8cb1-ae17cbc26115\") "
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.946309 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz2zh\" (UniqueName: \"kubernetes.io/projected/26e3501c-7d69-4357-8507-a819ecf777e3-kube-api-access-qz2zh\") pod \"26e3501c-7d69-4357-8507-a819ecf777e3\" (UID: \"26e3501c-7d69-4357-8507-a819ecf777e3\") "
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.946483 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-utilities" (OuterVolumeSpecName: "utilities") pod "26e3501c-7d69-4357-8507-a819ecf777e3" (UID: "26e3501c-7d69-4357-8507-a819ecf777e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.947005 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.948910 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-utilities" (OuterVolumeSpecName: "utilities") pod "c10e0801-688e-4095-8cb1-ae17cbc26115" (UID: "c10e0801-688e-4095-8cb1-ae17cbc26115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.952928 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e3501c-7d69-4357-8507-a819ecf777e3-kube-api-access-qz2zh" (OuterVolumeSpecName: "kube-api-access-qz2zh") pod "26e3501c-7d69-4357-8507-a819ecf777e3" (UID: "26e3501c-7d69-4357-8507-a819ecf777e3"). InnerVolumeSpecName "kube-api-access-qz2zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.953860 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10e0801-688e-4095-8cb1-ae17cbc26115-kube-api-access-8psk4" (OuterVolumeSpecName: "kube-api-access-8psk4") pod "c10e0801-688e-4095-8cb1-ae17cbc26115" (UID: "c10e0801-688e-4095-8cb1-ae17cbc26115"). InnerVolumeSpecName "kube-api-access-8psk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 08:35:36 crc kubenswrapper[5023]: I0219 08:35:36.972724 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c10e0801-688e-4095-8cb1-ae17cbc26115" (UID: "c10e0801-688e-4095-8cb1-ae17cbc26115"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.048773 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.048817 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz2zh\" (UniqueName: \"kubernetes.io/projected/26e3501c-7d69-4357-8507-a819ecf777e3-kube-api-access-qz2zh\") on node \"crc\" DevicePath \"\""
Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.048831 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10e0801-688e-4095-8cb1-ae17cbc26115-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.048846 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8psk4\" (UniqueName:
\"kubernetes.io/projected/c10e0801-688e-4095-8cb1-ae17cbc26115-kube-api-access-8psk4\") on node \"crc\" DevicePath \"\"" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.083364 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26e3501c-7d69-4357-8507-a819ecf777e3" (UID: "26e3501c-7d69-4357-8507-a819ecf777e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.150897 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e3501c-7d69-4357-8507-a819ecf777e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.320128 5023 generic.go:334] "Generic (PLEG): container finished" podID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerID="093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c" exitCode=0 Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.320178 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpcnk" event={"ID":"c10e0801-688e-4095-8cb1-ae17cbc26115","Type":"ContainerDied","Data":"093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c"} Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.320213 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cpcnk" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.320241 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cpcnk" event={"ID":"c10e0801-688e-4095-8cb1-ae17cbc26115","Type":"ContainerDied","Data":"240aca506c9ab1628d6de4917ea5223f6df6272545114692fdb298968eb656ea"} Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.320265 5023 scope.go:117] "RemoveContainer" containerID="093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.323947 5023 generic.go:334] "Generic (PLEG): container finished" podID="26e3501c-7d69-4357-8507-a819ecf777e3" containerID="b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8" exitCode=0 Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.324010 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9fv8" event={"ID":"26e3501c-7d69-4357-8507-a819ecf777e3","Type":"ContainerDied","Data":"b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8"} Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.324041 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9fv8" event={"ID":"26e3501c-7d69-4357-8507-a819ecf777e3","Type":"ContainerDied","Data":"b5a49fcc4b795ec32352f37ed7490f0d5f41a2108cc6f343c2707b918cf9fd9d"} Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.324110 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d9fv8" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.342645 5023 scope.go:117] "RemoveContainer" containerID="1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.369987 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpcnk"] Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.375419 5023 scope.go:117] "RemoveContainer" containerID="88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.379663 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cpcnk"] Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.387603 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9fv8"] Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.390496 5023 scope.go:117] "RemoveContainer" containerID="093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c" Feb 19 08:35:37 crc kubenswrapper[5023]: E0219 08:35:37.390940 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c\": container with ID starting with 093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c not found: ID does not exist" containerID="093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.390974 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c"} err="failed to get container status \"093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c\": rpc error: code = NotFound desc = could not find container 
\"093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c\": container with ID starting with 093d50f732eb19a1f872763e1d88d6729ad72dd8a6c70e2162414c7e6dd4b40c not found: ID does not exist" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.390994 5023 scope.go:117] "RemoveContainer" containerID="1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244" Feb 19 08:35:37 crc kubenswrapper[5023]: E0219 08:35:37.391412 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244\": container with ID starting with 1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244 not found: ID does not exist" containerID="1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.391452 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244"} err="failed to get container status \"1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244\": rpc error: code = NotFound desc = could not find container \"1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244\": container with ID starting with 1d17829aeb1784953fb02642eaf00d46785af6fd42c2e539e92d60c189499244 not found: ID does not exist" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.391484 5023 scope.go:117] "RemoveContainer" containerID="88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5" Feb 19 08:35:37 crc kubenswrapper[5023]: E0219 08:35:37.391868 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5\": container with ID starting with 88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5 not found: ID does not exist" 
containerID="88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.391904 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5"} err="failed to get container status \"88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5\": rpc error: code = NotFound desc = could not find container \"88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5\": container with ID starting with 88c654d3f3be864fcddec5f803b3b474753270bd89352274f6d940c0b533d6b5 not found: ID does not exist" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.391919 5023 scope.go:117] "RemoveContainer" containerID="b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.394567 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d9fv8"] Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.406935 5023 scope.go:117] "RemoveContainer" containerID="080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.430166 5023 scope.go:117] "RemoveContainer" containerID="bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.477742 5023 scope.go:117] "RemoveContainer" containerID="b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8" Feb 19 08:35:37 crc kubenswrapper[5023]: E0219 08:35:37.478400 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8\": container with ID starting with b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8 not found: ID does not exist" 
containerID="b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.478430 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8"} err="failed to get container status \"b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8\": rpc error: code = NotFound desc = could not find container \"b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8\": container with ID starting with b09114f42ef2cafbdc63826ccd33817b5e8902c1352010d1016076236ab9f6c8 not found: ID does not exist" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.478453 5023 scope.go:117] "RemoveContainer" containerID="080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42" Feb 19 08:35:37 crc kubenswrapper[5023]: E0219 08:35:37.478744 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42\": container with ID starting with 080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42 not found: ID does not exist" containerID="080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.478776 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42"} err="failed to get container status \"080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42\": rpc error: code = NotFound desc = could not find container \"080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42\": container with ID starting with 080052d9ca0b07ec5fd553de6d706eb52a46f17c8310902d033084abc5b37d42 not found: ID does not exist" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.478796 5023 scope.go:117] 
"RemoveContainer" containerID="bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d" Feb 19 08:35:37 crc kubenswrapper[5023]: E0219 08:35:37.479341 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d\": container with ID starting with bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d not found: ID does not exist" containerID="bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.479368 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d"} err="failed to get container status \"bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d\": rpc error: code = NotFound desc = could not find container \"bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d\": container with ID starting with bb08dfd3f90bc065c003955ee340bb4e14b5e2356b1a047f9b668da60b09fd9d not found: ID does not exist" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.489270 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" path="/var/lib/kubelet/pods/26e3501c-7d69-4357-8507-a819ecf777e3/volumes" Feb 19 08:35:37 crc kubenswrapper[5023]: I0219 08:35:37.490096 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" path="/var/lib/kubelet/pods/c10e0801-688e-4095-8cb1-ae17cbc26115/volumes" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.006497 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_834506b4-7dc5-4648-8e9f-abdbc041753a/init-config-reloader/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.187911 5023 log.go:25] "Finished parsing log 
file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_834506b4-7dc5-4648-8e9f-abdbc041753a/init-config-reloader/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.205139 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_834506b4-7dc5-4648-8e9f-abdbc041753a/alertmanager/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.264220 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_834506b4-7dc5-4648-8e9f-abdbc041753a/config-reloader/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.398874 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_8a3f37b2-4a57-46ca-91fa-013a146747ef/ceilometer-central-agent/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.404251 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_8a3f37b2-4a57-46ca-91fa-013a146747ef/ceilometer-notification-agent/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.462705 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_8a3f37b2-4a57-46ca-91fa-013a146747ef/proxy-httpd/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.587166 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_8a3f37b2-4a57-46ca-91fa-013a146747ef/sg-core/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.794380 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-747f4cf75-wlbr2_dc6ddf02-3388-47f8-a46e-5528afaa1d4f/keystone-api/0.log" Feb 19 08:35:55 crc kubenswrapper[5023]: I0219 08:35:55.877204 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_ba186aeb-8303-4be0-b6a1-ba2b8de453a5/kube-state-metrics/0.log" Feb 19 08:35:56 
crc kubenswrapper[5023]: I0219 08:35:56.200656 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_36b7f388-e73a-4206-bc50-93365c2e8515/mysql-bootstrap/0.log" Feb 19 08:35:56 crc kubenswrapper[5023]: I0219 08:35:56.421339 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_36b7f388-e73a-4206-bc50-93365c2e8515/mysql-bootstrap/0.log" Feb 19 08:35:56 crc kubenswrapper[5023]: I0219 08:35:56.467609 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_36b7f388-e73a-4206-bc50-93365c2e8515/galera/0.log" Feb 19 08:35:56 crc kubenswrapper[5023]: I0219 08:35:56.664309 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_94aa582c-4929-4dcc-9de1-083027faf8b1/openstackclient/0.log" Feb 19 08:35:57 crc kubenswrapper[5023]: I0219 08:35:57.022946 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_7b0233d3-76a4-4e22-b584-b5ccdc1d82cc/init-config-reloader/0.log" Feb 19 08:35:57 crc kubenswrapper[5023]: I0219 08:35:57.435382 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_7b0233d3-76a4-4e22-b584-b5ccdc1d82cc/init-config-reloader/0.log" Feb 19 08:35:57 crc kubenswrapper[5023]: I0219 08:35:57.457845 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_7b0233d3-76a4-4e22-b584-b5ccdc1d82cc/config-reloader/0.log" Feb 19 08:35:57 crc kubenswrapper[5023]: I0219 08:35:57.459753 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_7b0233d3-76a4-4e22-b584-b5ccdc1d82cc/prometheus/0.log" Feb 19 08:35:57 crc kubenswrapper[5023]: I0219 08:35:57.700944 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_ecf2c85d-9255-40bd-ac78-4165403c1754/setup-container/0.log" Feb 19 08:35:57 crc kubenswrapper[5023]: I0219 08:35:57.967556 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_7b0233d3-76a4-4e22-b584-b5ccdc1d82cc/thanos-sidecar/0.log" Feb 19 08:35:58 crc kubenswrapper[5023]: I0219 08:35:58.104045 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_ecf2c85d-9255-40bd-ac78-4165403c1754/setup-container/0.log" Feb 19 08:35:58 crc kubenswrapper[5023]: I0219 08:35:58.171229 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_ecf2c85d-9255-40bd-ac78-4165403c1754/rabbitmq/0.log" Feb 19 08:35:58 crc kubenswrapper[5023]: I0219 08:35:58.402473 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_7cec7daa-e826-419c-9c77-cfcabc90b362/setup-container/0.log" Feb 19 08:35:58 crc kubenswrapper[5023]: I0219 08:35:58.587713 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_7cec7daa-e826-419c-9c77-cfcabc90b362/setup-container/0.log" Feb 19 08:35:58 crc kubenswrapper[5023]: I0219 08:35:58.709663 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_7cec7daa-e826-419c-9c77-cfcabc90b362/rabbitmq/0.log" Feb 19 08:36:05 crc kubenswrapper[5023]: I0219 08:36:05.632857 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_29422048-b3f8-4f11-a4d8-e633cb5d12b8/memcached/0.log" Feb 19 08:36:11 crc kubenswrapper[5023]: I0219 08:36:11.870689 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:36:11 crc kubenswrapper[5023]: I0219 08:36:11.871143 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:36:17 crc kubenswrapper[5023]: I0219 08:36:17.498687 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc_f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191/util/0.log" Feb 19 08:36:17 crc kubenswrapper[5023]: I0219 08:36:17.758161 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc_f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191/pull/0.log" Feb 19 08:36:17 crc kubenswrapper[5023]: I0219 08:36:17.772199 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc_f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191/util/0.log" Feb 19 08:36:17 crc kubenswrapper[5023]: I0219 08:36:17.857063 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc_f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191/pull/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.181693 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc_f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191/util/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.250459 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc_f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191/extract/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.255097 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5r6bmc_f5bf7ea2-0fc2-4a1c-b33f-38f9396aa191/pull/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.427864 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27_dafb8755-d116-4ada-8f8a-4b16ed12b6a1/util/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.609678 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27_dafb8755-d116-4ada-8f8a-4b16ed12b6a1/util/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.617710 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27_dafb8755-d116-4ada-8f8a-4b16ed12b6a1/pull/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.621876 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27_dafb8755-d116-4ada-8f8a-4b16ed12b6a1/pull/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.780873 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27_dafb8755-d116-4ada-8f8a-4b16ed12b6a1/extract/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.783742 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27_dafb8755-d116-4ada-8f8a-4b16ed12b6a1/util/0.log" Feb 
19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.793598 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wlp27_dafb8755-d116-4ada-8f8a-4b16ed12b6a1/pull/0.log" Feb 19 08:36:18 crc kubenswrapper[5023]: I0219 08:36:18.953779 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh_5e684cb3-b258-4828-9438-41f79a2a9bf7/util/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.153823 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh_5e684cb3-b258-4828-9438-41f79a2a9bf7/util/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.159493 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh_5e684cb3-b258-4828-9438-41f79a2a9bf7/pull/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.167038 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh_5e684cb3-b258-4828-9438-41f79a2a9bf7/pull/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.345856 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh_5e684cb3-b258-4828-9438-41f79a2a9bf7/util/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.373543 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh_5e684cb3-b258-4828-9438-41f79a2a9bf7/pull/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.386145 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213mdssh_5e684cb3-b258-4828-9438-41f79a2a9bf7/extract/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.569888 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tkmgb_eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627/extract-utilities/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.832748 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tkmgb_eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627/extract-content/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.894545 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tkmgb_eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627/extract-content/0.log" Feb 19 08:36:19 crc kubenswrapper[5023]: I0219 08:36:19.935162 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tkmgb_eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627/extract-utilities/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.143687 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tkmgb_eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627/extract-utilities/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.175296 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tkmgb_eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627/extract-content/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.387090 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqk9z_e0d2964c-4c2f-4c86-bcf9-a5e574c18629/extract-utilities/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.525819 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-tkmgb_eba6e82c-2ec1-44e8-ab0e-0cf6f88d4627/registry-server/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.710892 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqk9z_e0d2964c-4c2f-4c86-bcf9-a5e574c18629/extract-content/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.728924 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqk9z_e0d2964c-4c2f-4c86-bcf9-a5e574c18629/extract-content/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.743440 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqk9z_e0d2964c-4c2f-4c86-bcf9-a5e574c18629/extract-utilities/0.log" Feb 19 08:36:20 crc kubenswrapper[5023]: I0219 08:36:20.916047 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqk9z_e0d2964c-4c2f-4c86-bcf9-a5e574c18629/extract-utilities/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.024542 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqk9z_e0d2964c-4c2f-4c86-bcf9-a5e574c18629/extract-content/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.254684 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n_96b16c33-02d5-4371-91f6-e2d137b49df6/util/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.335340 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pqk9z_e0d2964c-4c2f-4c86-bcf9-a5e574c18629/registry-server/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.422693 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n_96b16c33-02d5-4371-91f6-e2d137b49df6/util/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.464585 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n_96b16c33-02d5-4371-91f6-e2d137b49df6/pull/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.464987 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n_96b16c33-02d5-4371-91f6-e2d137b49df6/pull/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.632359 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n_96b16c33-02d5-4371-91f6-e2d137b49df6/util/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.638960 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n_96b16c33-02d5-4371-91f6-e2d137b49df6/extract/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.644268 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecard96n_96b16c33-02d5-4371-91f6-e2d137b49df6/pull/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.686848 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2zqn9_6708d9d6-f225-4977-9446-8c2374e80e18/marketplace-operator/0.log" Feb 19 08:36:21 crc kubenswrapper[5023]: I0219 08:36:21.822087 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gf9zh_cb3df312-4ed1-4b2c-bfb0-52328b896bdc/extract-utilities/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: 
I0219 08:36:22.023246 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gf9zh_cb3df312-4ed1-4b2c-bfb0-52328b896bdc/extract-utilities/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.230994 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gf9zh_cb3df312-4ed1-4b2c-bfb0-52328b896bdc/extract-content/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.278872 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gf9zh_cb3df312-4ed1-4b2c-bfb0-52328b896bdc/extract-content/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.400900 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gf9zh_cb3df312-4ed1-4b2c-bfb0-52328b896bdc/extract-utilities/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.405068 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gf9zh_cb3df312-4ed1-4b2c-bfb0-52328b896bdc/extract-content/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.526926 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsgr7_ba7c1033-62a2-4d63-b198-075622e7f90c/extract-utilities/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.530014 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gf9zh_cb3df312-4ed1-4b2c-bfb0-52328b896bdc/registry-server/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.680171 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsgr7_ba7c1033-62a2-4d63-b198-075622e7f90c/extract-utilities/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.680852 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hsgr7_ba7c1033-62a2-4d63-b198-075622e7f90c/extract-content/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.708574 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsgr7_ba7c1033-62a2-4d63-b198-075622e7f90c/extract-content/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.963830 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsgr7_ba7c1033-62a2-4d63-b198-075622e7f90c/extract-utilities/0.log" Feb 19 08:36:22 crc kubenswrapper[5023]: I0219 08:36:22.988294 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsgr7_ba7c1033-62a2-4d63-b198-075622e7f90c/extract-content/0.log" Feb 19 08:36:23 crc kubenswrapper[5023]: I0219 08:36:23.305183 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsgr7_ba7c1033-62a2-4d63-b198-075622e7f90c/registry-server/0.log" Feb 19 08:36:32 crc kubenswrapper[5023]: I0219 08:36:32.326039 5023 scope.go:117] "RemoveContainer" containerID="45fd0549fca8bf9e41a822bbd236f05ae1e65262832ca1386fcddc759c0725eb" Feb 19 08:36:32 crc kubenswrapper[5023]: I0219 08:36:32.344071 5023 scope.go:117] "RemoveContainer" containerID="d782967d86d08dab08a3b9f0e1f1b25fcb938b2ce84606b8a74e55d2fdc451ca" Feb 19 08:36:32 crc kubenswrapper[5023]: I0219 08:36:32.411062 5023 scope.go:117] "RemoveContainer" containerID="4057d9c6bca934b43b47bbc183ac001df4882a5da76cbbd1337715d1bc21620a" Feb 19 08:36:37 crc kubenswrapper[5023]: I0219 08:36:37.094084 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d9bf9854-9rvqk_4b26147b-3c73-4b0d-8810-38d893b67b6b/prometheus-operator-admission-webhook/0.log" Feb 19 08:36:37 crc kubenswrapper[5023]: I0219 08:36:37.116180 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d9bf9854-9dl9f_c5c5f372-8b6a-4454-bc6a-0dcda2907ec1/prometheus-operator-admission-webhook/0.log" Feb 19 08:36:37 crc kubenswrapper[5023]: I0219 08:36:37.129820 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qgn84_9ac16bf5-97d2-478b-a915-9f9919ecd59e/prometheus-operator/0.log" Feb 19 08:36:37 crc kubenswrapper[5023]: I0219 08:36:37.323761 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ztvtc_817dfdb3-899e-49c9-9a8b-73f8c3e80c52/observability-ui-dashboards/0.log" Feb 19 08:36:37 crc kubenswrapper[5023]: I0219 08:36:37.331210 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-vg2dl_49bbb335-22f1-432d-8508-9575cf6006ac/perses-operator/0.log" Feb 19 08:36:37 crc kubenswrapper[5023]: I0219 08:36:37.356256 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jghsx_abccc29c-4404-4fbf-abec-9046e05e6bc3/operator/0.log" Feb 19 08:36:41 crc kubenswrapper[5023]: I0219 08:36:41.869927 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:36:41 crc kubenswrapper[5023]: I0219 08:36:41.870462 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.165083 5023 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zbnk5"] Feb 19 08:36:53 crc kubenswrapper[5023]: E0219 08:36:53.165839 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="registry-server" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.165850 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="registry-server" Feb 19 08:36:53 crc kubenswrapper[5023]: E0219 08:36:53.165865 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="extract-content" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.165870 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="extract-content" Feb 19 08:36:53 crc kubenswrapper[5023]: E0219 08:36:53.165893 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="extract-content" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.165901 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="extract-content" Feb 19 08:36:53 crc kubenswrapper[5023]: E0219 08:36:53.165911 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="extract-utilities" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.165916 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="extract-utilities" Feb 19 08:36:53 crc kubenswrapper[5023]: E0219 08:36:53.165925 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="registry-server" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.165931 5023 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="registry-server" Feb 19 08:36:53 crc kubenswrapper[5023]: E0219 08:36:53.165946 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="extract-utilities" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.165953 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="extract-utilities" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.166081 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10e0801-688e-4095-8cb1-ae17cbc26115" containerName="registry-server" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.166098 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e3501c-7d69-4357-8507-a819ecf777e3" containerName="registry-server" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.167334 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.171778 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbnk5"] Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.305597 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-utilities\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.305689 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772nb\" (UniqueName: \"kubernetes.io/projected/fd9806d9-d495-4104-ab79-ca3f13fbe54d-kube-api-access-772nb\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.305814 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-catalog-content\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.407109 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-catalog-content\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.407220 5023 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-utilities\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.407251 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772nb\" (UniqueName: \"kubernetes.io/projected/fd9806d9-d495-4104-ab79-ca3f13fbe54d-kube-api-access-772nb\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.407698 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-catalog-content\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.407851 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-utilities\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.438137 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772nb\" (UniqueName: \"kubernetes.io/projected/fd9806d9-d495-4104-ab79-ca3f13fbe54d-kube-api-access-772nb\") pod \"community-operators-zbnk5\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:53 crc kubenswrapper[5023]: I0219 08:36:53.501461 5023 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:36:54 crc kubenswrapper[5023]: I0219 08:36:54.029154 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbnk5"] Feb 19 08:36:55 crc kubenswrapper[5023]: I0219 08:36:55.025495 5023 generic.go:334] "Generic (PLEG): container finished" podID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerID="f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89" exitCode=0 Feb 19 08:36:55 crc kubenswrapper[5023]: I0219 08:36:55.025548 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbnk5" event={"ID":"fd9806d9-d495-4104-ab79-ca3f13fbe54d","Type":"ContainerDied","Data":"f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89"} Feb 19 08:36:55 crc kubenswrapper[5023]: I0219 08:36:55.025755 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbnk5" event={"ID":"fd9806d9-d495-4104-ab79-ca3f13fbe54d","Type":"ContainerStarted","Data":"d032697893eaec730d7b62dea9c05661902c7f5fd9c0560fed9a700388fadb09"} Feb 19 08:36:55 crc kubenswrapper[5023]: I0219 08:36:55.027769 5023 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 08:36:56 crc kubenswrapper[5023]: I0219 08:36:56.035043 5023 generic.go:334] "Generic (PLEG): container finished" podID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerID="de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f" exitCode=0 Feb 19 08:36:56 crc kubenswrapper[5023]: I0219 08:36:56.035121 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbnk5" event={"ID":"fd9806d9-d495-4104-ab79-ca3f13fbe54d","Type":"ContainerDied","Data":"de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f"} Feb 19 08:36:57 crc kubenswrapper[5023]: I0219 08:36:57.046166 5023 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-zbnk5" event={"ID":"fd9806d9-d495-4104-ab79-ca3f13fbe54d","Type":"ContainerStarted","Data":"b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb"} Feb 19 08:37:03 crc kubenswrapper[5023]: I0219 08:37:03.502166 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:37:03 crc kubenswrapper[5023]: I0219 08:37:03.502706 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:37:03 crc kubenswrapper[5023]: I0219 08:37:03.544574 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:37:03 crc kubenswrapper[5023]: I0219 08:37:03.569576 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zbnk5" podStartSLOduration=9.190543769 podStartE2EDuration="10.569558958s" podCreationTimestamp="2026-02-19 08:36:53 +0000 UTC" firstStartedPulling="2026-02-19 08:36:55.027502772 +0000 UTC m=+2172.684621720" lastFinishedPulling="2026-02-19 08:36:56.406517961 +0000 UTC m=+2174.063636909" observedRunningTime="2026-02-19 08:36:57.076179038 +0000 UTC m=+2174.733297986" watchObservedRunningTime="2026-02-19 08:37:03.569558958 +0000 UTC m=+2181.226677906" Feb 19 08:37:04 crc kubenswrapper[5023]: I0219 08:37:04.137705 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.143562 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbnk5"] Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.144142 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zbnk5" 
podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="registry-server" containerID="cri-o://b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb" gracePeriod=2 Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.642002 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.752445 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-utilities\") pod \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.752676 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-catalog-content\") pod \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.752718 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-772nb\" (UniqueName: \"kubernetes.io/projected/fd9806d9-d495-4104-ab79-ca3f13fbe54d-kube-api-access-772nb\") pod \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\" (UID: \"fd9806d9-d495-4104-ab79-ca3f13fbe54d\") " Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.753282 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-utilities" (OuterVolumeSpecName: "utilities") pod "fd9806d9-d495-4104-ab79-ca3f13fbe54d" (UID: "fd9806d9-d495-4104-ab79-ca3f13fbe54d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.761454 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9806d9-d495-4104-ab79-ca3f13fbe54d-kube-api-access-772nb" (OuterVolumeSpecName: "kube-api-access-772nb") pod "fd9806d9-d495-4104-ab79-ca3f13fbe54d" (UID: "fd9806d9-d495-4104-ab79-ca3f13fbe54d"). InnerVolumeSpecName "kube-api-access-772nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.801421 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd9806d9-d495-4104-ab79-ca3f13fbe54d" (UID: "fd9806d9-d495-4104-ab79-ca3f13fbe54d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.854245 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.854287 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9806d9-d495-4104-ab79-ca3f13fbe54d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:07 crc kubenswrapper[5023]: I0219 08:37:07.854303 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-772nb\" (UniqueName: \"kubernetes.io/projected/fd9806d9-d495-4104-ab79-ca3f13fbe54d-kube-api-access-772nb\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.126827 5023 generic.go:334] "Generic (PLEG): container finished" podID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" 
containerID="b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb" exitCode=0 Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.126870 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbnk5" event={"ID":"fd9806d9-d495-4104-ab79-ca3f13fbe54d","Type":"ContainerDied","Data":"b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb"} Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.127228 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbnk5" event={"ID":"fd9806d9-d495-4104-ab79-ca3f13fbe54d","Type":"ContainerDied","Data":"d032697893eaec730d7b62dea9c05661902c7f5fd9c0560fed9a700388fadb09"} Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.127290 5023 scope.go:117] "RemoveContainer" containerID="b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.126913 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbnk5" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.146485 5023 scope.go:117] "RemoveContainer" containerID="de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.176491 5023 scope.go:117] "RemoveContainer" containerID="f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.206015 5023 scope.go:117] "RemoveContainer" containerID="b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb" Feb 19 08:37:08 crc kubenswrapper[5023]: E0219 08:37:08.206798 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb\": container with ID starting with b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb not found: ID does not exist" containerID="b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.212059 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb"} err="failed to get container status \"b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb\": rpc error: code = NotFound desc = could not find container \"b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb\": container with ID starting with b9eedba3cc94b3e3558bbf959ea68f660e72581892ccb97ca3bf9df84977d6cb not found: ID does not exist" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.212233 5023 scope.go:117] "RemoveContainer" containerID="de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f" Feb 19 08:37:08 crc kubenswrapper[5023]: E0219 08:37:08.213600 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f\": container with ID starting with de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f not found: ID does not exist" containerID="de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.213662 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f"} err="failed to get container status \"de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f\": rpc error: code = NotFound desc = could not find container \"de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f\": container with ID starting with de200143ce55101fb335f809eb82c7b6b0531193ec7a5cf8dcbc5622a1468b8f not found: ID does not exist" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.213689 5023 scope.go:117] "RemoveContainer" containerID="f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89" Feb 19 08:37:08 crc kubenswrapper[5023]: E0219 08:37:08.214362 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89\": container with ID starting with f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89 not found: ID does not exist" containerID="f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.214402 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89"} err="failed to get container status \"f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89\": rpc error: code = NotFound desc = could not find container 
\"f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89\": container with ID starting with f180b2c801ee547dde2c759538b2b847234a4e3000ffe988b1aaca14f62d7f89 not found: ID does not exist" Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.220266 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbnk5"] Feb 19 08:37:08 crc kubenswrapper[5023]: I0219 08:37:08.226272 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zbnk5"] Feb 19 08:37:09 crc kubenswrapper[5023]: I0219 08:37:09.487100 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" path="/var/lib/kubelet/pods/fd9806d9-d495-4104-ab79-ca3f13fbe54d/volumes" Feb 19 08:37:11 crc kubenswrapper[5023]: I0219 08:37:11.870487 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:37:11 crc kubenswrapper[5023]: I0219 08:37:11.870875 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:37:11 crc kubenswrapper[5023]: I0219 08:37:11.870938 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:37:11 crc kubenswrapper[5023]: I0219 08:37:11.871654 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c5cce94256b07d6b6ecdf98c263895426fc5e174d39523cbcdc0c88f3b6e0a4b"} pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:37:11 crc kubenswrapper[5023]: I0219 08:37:11.871712 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://c5cce94256b07d6b6ecdf98c263895426fc5e174d39523cbcdc0c88f3b6e0a4b" gracePeriod=600 Feb 19 08:37:12 crc kubenswrapper[5023]: I0219 08:37:12.165761 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="c5cce94256b07d6b6ecdf98c263895426fc5e174d39523cbcdc0c88f3b6e0a4b" exitCode=0 Feb 19 08:37:12 crc kubenswrapper[5023]: I0219 08:37:12.165808 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"c5cce94256b07d6b6ecdf98c263895426fc5e174d39523cbcdc0c88f3b6e0a4b"} Feb 19 08:37:12 crc kubenswrapper[5023]: I0219 08:37:12.165994 5023 scope.go:117] "RemoveContainer" containerID="f824af99b487328ceae759a718ed19e26f6564fbc5441189673bb4f3498bf848" Feb 19 08:37:13 crc kubenswrapper[5023]: I0219 08:37:13.176893 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerStarted","Data":"963e670fe40cae311d699273b4c258b9948d56d2b0ff72f5f9b41aa0dd39c545"} Feb 19 08:37:32 crc kubenswrapper[5023]: I0219 08:37:32.550520 5023 scope.go:117] "RemoveContainer" containerID="651295a8ce271fe2fa3d268b992d1436967927f596688fd7393d7defe2023c16" Feb 19 08:37:32 crc kubenswrapper[5023]: I0219 
08:37:32.571997 5023 scope.go:117] "RemoveContainer" containerID="f0db34e6d4a695f8c4a79f9cf7291ab9b5099efe62c91c5419e50811df6e4cff" Feb 19 08:37:32 crc kubenswrapper[5023]: I0219 08:37:32.602093 5023 scope.go:117] "RemoveContainer" containerID="63a64de1890df2bc32c4973d7e16bf6d37b570bae5d98bbaa9cc865c84f6a946" Feb 19 08:37:32 crc kubenswrapper[5023]: I0219 08:37:32.657125 5023 scope.go:117] "RemoveContainer" containerID="b6e658322e38bc7c0c7757b25cb1f6b1b44a3aaa0a0629bd0e4185a7c6603b30" Feb 19 08:37:32 crc kubenswrapper[5023]: I0219 08:37:32.674307 5023 scope.go:117] "RemoveContainer" containerID="921c3b2965e5260b2eb1ab96bd0d96cecd971fa7698627ba55bcaffb911cae71" Feb 19 08:37:32 crc kubenswrapper[5023]: I0219 08:37:32.697489 5023 scope.go:117] "RemoveContainer" containerID="1f9dcdf7e3ab927e4ea5acf1fd53948d16e533e72a489bf059502c7fc6896a4d" Feb 19 08:37:32 crc kubenswrapper[5023]: I0219 08:37:32.729980 5023 scope.go:117] "RemoveContainer" containerID="71a2e3823fcffbd8dde12c8deb6938d26fa0d5c1ebfb5093a30f62fe215cb461" Feb 19 08:37:47 crc kubenswrapper[5023]: I0219 08:37:47.452901 5023 generic.go:334] "Generic (PLEG): container finished" podID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerID="1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c" exitCode=0 Feb 19 08:37:47 crc kubenswrapper[5023]: I0219 08:37:47.453093 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" event={"ID":"0483dcab-4d27-47b1-b98a-26e9535c123e","Type":"ContainerDied","Data":"1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c"} Feb 19 08:37:47 crc kubenswrapper[5023]: I0219 08:37:47.455519 5023 scope.go:117] "RemoveContainer" containerID="1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c" Feb 19 08:37:47 crc kubenswrapper[5023]: I0219 08:37:47.545767 5023 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-7gwnc_must-gather-mf8lc_0483dcab-4d27-47b1-b98a-26e9535c123e/gather/0.log" Feb 19 08:37:54 crc kubenswrapper[5023]: I0219 08:37:54.796339 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7gwnc/must-gather-mf8lc"] Feb 19 08:37:54 crc kubenswrapper[5023]: I0219 08:37:54.799075 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerName="copy" containerID="cri-o://d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4" gracePeriod=2 Feb 19 08:37:54 crc kubenswrapper[5023]: I0219 08:37:54.817806 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7gwnc/must-gather-mf8lc"] Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.281579 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7gwnc_must-gather-mf8lc_0483dcab-4d27-47b1-b98a-26e9535c123e/copy/0.log" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.282301 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.387006 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cfd6\" (UniqueName: \"kubernetes.io/projected/0483dcab-4d27-47b1-b98a-26e9535c123e-kube-api-access-4cfd6\") pod \"0483dcab-4d27-47b1-b98a-26e9535c123e\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.387075 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0483dcab-4d27-47b1-b98a-26e9535c123e-must-gather-output\") pod \"0483dcab-4d27-47b1-b98a-26e9535c123e\" (UID: \"0483dcab-4d27-47b1-b98a-26e9535c123e\") " Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.394282 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0483dcab-4d27-47b1-b98a-26e9535c123e-kube-api-access-4cfd6" (OuterVolumeSpecName: "kube-api-access-4cfd6") pod "0483dcab-4d27-47b1-b98a-26e9535c123e" (UID: "0483dcab-4d27-47b1-b98a-26e9535c123e"). InnerVolumeSpecName "kube-api-access-4cfd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.488533 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cfd6\" (UniqueName: \"kubernetes.io/projected/0483dcab-4d27-47b1-b98a-26e9535c123e-kube-api-access-4cfd6\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.514206 5023 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7gwnc_must-gather-mf8lc_0483dcab-4d27-47b1-b98a-26e9535c123e/copy/0.log" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.514677 5023 generic.go:334] "Generic (PLEG): container finished" podID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerID="d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4" exitCode=143 Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.514733 5023 scope.go:117] "RemoveContainer" containerID="d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.514771 5023 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gwnc/must-gather-mf8lc" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.519990 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0483dcab-4d27-47b1-b98a-26e9535c123e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0483dcab-4d27-47b1-b98a-26e9535c123e" (UID: "0483dcab-4d27-47b1-b98a-26e9535c123e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.541016 5023 scope.go:117] "RemoveContainer" containerID="1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.589668 5023 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0483dcab-4d27-47b1-b98a-26e9535c123e-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.628078 5023 scope.go:117] "RemoveContainer" containerID="d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4" Feb 19 08:37:55 crc kubenswrapper[5023]: E0219 08:37:55.628831 5023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4\": container with ID starting with d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4 not found: ID does not exist" containerID="d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.628861 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4"} err="failed to get container status \"d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4\": rpc error: code = NotFound desc = could not find container \"d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4\": container with ID starting with d5a7c16bed86e76e24ea73b7a34fea947d353fb1b8f42055472c15e7b46cacd4 not found: ID does not exist" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.628879 5023 scope.go:117] "RemoveContainer" containerID="1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c" Feb 19 08:37:55 crc kubenswrapper[5023]: E0219 08:37:55.629289 5023 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c\": container with ID starting with 1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c not found: ID does not exist" containerID="1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c" Feb 19 08:37:55 crc kubenswrapper[5023]: I0219 08:37:55.629315 5023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c"} err="failed to get container status \"1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c\": rpc error: code = NotFound desc = could not find container \"1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c\": container with ID starting with 1e6d97c6909f9a59dd79a2ec18f27a2ceb9d941e912ddde55e9fd87e83be455c not found: ID does not exist" Feb 19 08:37:57 crc kubenswrapper[5023]: I0219 08:37:57.492191 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" path="/var/lib/kubelet/pods/0483dcab-4d27-47b1-b98a-26e9535c123e/volumes" Feb 19 08:38:32 crc kubenswrapper[5023]: I0219 08:38:32.901544 5023 scope.go:117] "RemoveContainer" containerID="eb8f1646d477f74d004d97e784640997a54053812da95c8d1b37c18e32d8b618" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.953653 5023 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9hr4s"] Feb 19 08:38:56 crc kubenswrapper[5023]: E0219 08:38:56.954718 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerName="gather" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.954737 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerName="gather" Feb 19 08:38:56 crc 
kubenswrapper[5023]: E0219 08:38:56.954750 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerName="copy" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.954757 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerName="copy" Feb 19 08:38:56 crc kubenswrapper[5023]: E0219 08:38:56.954772 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="extract-utilities" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.954782 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="extract-utilities" Feb 19 08:38:56 crc kubenswrapper[5023]: E0219 08:38:56.954822 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="extract-content" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.954829 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="extract-content" Feb 19 08:38:56 crc kubenswrapper[5023]: E0219 08:38:56.954844 5023 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="registry-server" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.954850 5023 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="registry-server" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.955067 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerName="copy" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.955083 5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="0483dcab-4d27-47b1-b98a-26e9535c123e" containerName="gather" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.955097 
5023 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9806d9-d495-4104-ab79-ca3f13fbe54d" containerName="registry-server" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.956537 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:56 crc kubenswrapper[5023]: I0219 08:38:56.981066 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hr4s"] Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.120557 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd77\" (UniqueName: \"kubernetes.io/projected/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-kube-api-access-cbd77\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.120922 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-utilities\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.121051 5023 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-catalog-content\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.222548 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-catalog-content\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.223061 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbd77\" (UniqueName: \"kubernetes.io/projected/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-kube-api-access-cbd77\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.223210 5023 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-utilities\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.223536 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-utilities\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.223594 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-catalog-content\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.245374 5023 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbd77\" (UniqueName: 
\"kubernetes.io/projected/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-kube-api-access-cbd77\") pod \"certified-operators-9hr4s\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.326325 5023 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:38:57 crc kubenswrapper[5023]: I0219 08:38:57.813303 5023 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hr4s"] Feb 19 08:38:58 crc kubenswrapper[5023]: I0219 08:38:58.022869 5023 generic.go:334] "Generic (PLEG): container finished" podID="c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" containerID="fa12a446333ff4fe6b070e7bcd1aadd1399941f8ffeb31d98da9c8a07b401d85" exitCode=0 Feb 19 08:38:58 crc kubenswrapper[5023]: I0219 08:38:58.022927 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hr4s" event={"ID":"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9","Type":"ContainerDied","Data":"fa12a446333ff4fe6b070e7bcd1aadd1399941f8ffeb31d98da9c8a07b401d85"} Feb 19 08:38:58 crc kubenswrapper[5023]: I0219 08:38:58.023126 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hr4s" event={"ID":"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9","Type":"ContainerStarted","Data":"126d9f9d9d721ef42d1d14a7bcf0954e95e30f443b836a69ee69593bb4e5e6d0"} Feb 19 08:38:59 crc kubenswrapper[5023]: I0219 08:38:59.031552 5023 generic.go:334] "Generic (PLEG): container finished" podID="c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" containerID="3b224b3e4eb33c5647c9253422ef5de6788a4f7dc262386eccdab7a2db523bbe" exitCode=0 Feb 19 08:38:59 crc kubenswrapper[5023]: I0219 08:38:59.031629 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hr4s" 
event={"ID":"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9","Type":"ContainerDied","Data":"3b224b3e4eb33c5647c9253422ef5de6788a4f7dc262386eccdab7a2db523bbe"} Feb 19 08:39:00 crc kubenswrapper[5023]: I0219 08:39:00.041303 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hr4s" event={"ID":"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9","Type":"ContainerStarted","Data":"ceeb74661e7b8560fd41a5e2d9a8416848031e5aaae1936e9a5b89121874878c"} Feb 19 08:39:00 crc kubenswrapper[5023]: I0219 08:39:00.085091 5023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9hr4s" podStartSLOduration=2.719821671 podStartE2EDuration="4.085074827s" podCreationTimestamp="2026-02-19 08:38:56 +0000 UTC" firstStartedPulling="2026-02-19 08:38:58.024218888 +0000 UTC m=+2295.681337826" lastFinishedPulling="2026-02-19 08:38:59.389472034 +0000 UTC m=+2297.046590982" observedRunningTime="2026-02-19 08:39:00.078305977 +0000 UTC m=+2297.735424925" watchObservedRunningTime="2026-02-19 08:39:00.085074827 +0000 UTC m=+2297.742193775" Feb 19 08:39:07 crc kubenswrapper[5023]: I0219 08:39:07.327180 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:39:07 crc kubenswrapper[5023]: I0219 08:39:07.327796 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:39:07 crc kubenswrapper[5023]: I0219 08:39:07.373113 5023 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:39:08 crc kubenswrapper[5023]: I0219 08:39:08.148638 5023 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:39:10 crc kubenswrapper[5023]: I0219 08:39:10.946000 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9hr4s"] Feb 19 08:39:10 crc kubenswrapper[5023]: I0219 08:39:10.946804 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9hr4s" podUID="c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" containerName="registry-server" containerID="cri-o://ceeb74661e7b8560fd41a5e2d9a8416848031e5aaae1936e9a5b89121874878c" gracePeriod=2 Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.153102 5023 generic.go:334] "Generic (PLEG): container finished" podID="c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" containerID="ceeb74661e7b8560fd41a5e2d9a8416848031e5aaae1936e9a5b89121874878c" exitCode=0 Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.153164 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hr4s" event={"ID":"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9","Type":"ContainerDied","Data":"ceeb74661e7b8560fd41a5e2d9a8416848031e5aaae1936e9a5b89121874878c"} Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.432599 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.554911 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-catalog-content\") pod \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.555404 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbd77\" (UniqueName: \"kubernetes.io/projected/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-kube-api-access-cbd77\") pod \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.555576 5023 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-utilities\") pod \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\" (UID: \"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9\") " Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.557505 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-utilities" (OuterVolumeSpecName: "utilities") pod "c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" (UID: "c8089c8b-26ba-4e4e-a24d-e032a0a19bb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.567368 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-kube-api-access-cbd77" (OuterVolumeSpecName: "kube-api-access-cbd77") pod "c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" (UID: "c8089c8b-26ba-4e4e-a24d-e032a0a19bb9"). InnerVolumeSpecName "kube-api-access-cbd77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.605453 5023 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" (UID: "c8089c8b-26ba-4e4e-a24d-e032a0a19bb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.658164 5023 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.658199 5023 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbd77\" (UniqueName: \"kubernetes.io/projected/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-kube-api-access-cbd77\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:11 crc kubenswrapper[5023]: I0219 08:39:11.658210 5023 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 08:39:12 crc kubenswrapper[5023]: I0219 08:39:12.162753 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hr4s" event={"ID":"c8089c8b-26ba-4e4e-a24d-e032a0a19bb9","Type":"ContainerDied","Data":"126d9f9d9d721ef42d1d14a7bcf0954e95e30f443b836a69ee69593bb4e5e6d0"} Feb 19 08:39:12 crc kubenswrapper[5023]: I0219 08:39:12.162804 5023 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9hr4s" Feb 19 08:39:12 crc kubenswrapper[5023]: I0219 08:39:12.162817 5023 scope.go:117] "RemoveContainer" containerID="ceeb74661e7b8560fd41a5e2d9a8416848031e5aaae1936e9a5b89121874878c" Feb 19 08:39:12 crc kubenswrapper[5023]: I0219 08:39:12.196413 5023 scope.go:117] "RemoveContainer" containerID="3b224b3e4eb33c5647c9253422ef5de6788a4f7dc262386eccdab7a2db523bbe" Feb 19 08:39:12 crc kubenswrapper[5023]: I0219 08:39:12.197516 5023 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9hr4s"] Feb 19 08:39:12 crc kubenswrapper[5023]: I0219 08:39:12.205234 5023 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9hr4s"] Feb 19 08:39:12 crc kubenswrapper[5023]: I0219 08:39:12.237384 5023 scope.go:117] "RemoveContainer" containerID="fa12a446333ff4fe6b070e7bcd1aadd1399941f8ffeb31d98da9c8a07b401d85" Feb 19 08:39:13 crc kubenswrapper[5023]: I0219 08:39:13.486280 5023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8089c8b-26ba-4e4e-a24d-e032a0a19bb9" path="/var/lib/kubelet/pods/c8089c8b-26ba-4e4e-a24d-e032a0a19bb9/volumes" Feb 19 08:39:41 crc kubenswrapper[5023]: I0219 08:39:41.870001 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:39:41 crc kubenswrapper[5023]: I0219 08:39:41.871378 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:40:11 crc kubenswrapper[5023]: 
I0219 08:40:11.869888 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:40:11 crc kubenswrapper[5023]: I0219 08:40:11.870406 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:40:41 crc kubenswrapper[5023]: I0219 08:40:41.870091 5023 patch_prober.go:28] interesting pod/machine-config-daemon-444kx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 08:40:41 crc kubenswrapper[5023]: I0219 08:40:41.870599 5023 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 08:40:41 crc kubenswrapper[5023]: I0219 08:40:41.870696 5023 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-444kx" Feb 19 08:40:41 crc kubenswrapper[5023]: I0219 08:40:41.871359 5023 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"963e670fe40cae311d699273b4c258b9948d56d2b0ff72f5f9b41aa0dd39c545"} 
pod="openshift-machine-config-operator/machine-config-daemon-444kx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 08:40:41 crc kubenswrapper[5023]: I0219 08:40:41.871413 5023 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerName="machine-config-daemon" containerID="cri-o://963e670fe40cae311d699273b4c258b9948d56d2b0ff72f5f9b41aa0dd39c545" gracePeriod=600 Feb 19 08:40:42 crc kubenswrapper[5023]: E0219 08:40:42.005955 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:40:42 crc kubenswrapper[5023]: I0219 08:40:42.848648 5023 generic.go:334] "Generic (PLEG): container finished" podID="b3e4d325-7b2d-4177-b955-cc85093996a1" containerID="963e670fe40cae311d699273b4c258b9948d56d2b0ff72f5f9b41aa0dd39c545" exitCode=0 Feb 19 08:40:42 crc kubenswrapper[5023]: I0219 08:40:42.848657 5023 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-444kx" event={"ID":"b3e4d325-7b2d-4177-b955-cc85093996a1","Type":"ContainerDied","Data":"963e670fe40cae311d699273b4c258b9948d56d2b0ff72f5f9b41aa0dd39c545"} Feb 19 08:40:42 crc kubenswrapper[5023]: I0219 08:40:42.848723 5023 scope.go:117] "RemoveContainer" containerID="c5cce94256b07d6b6ecdf98c263895426fc5e174d39523cbcdc0c88f3b6e0a4b" Feb 19 08:40:42 crc kubenswrapper[5023]: I0219 08:40:42.849389 5023 scope.go:117] "RemoveContainer" containerID="963e670fe40cae311d699273b4c258b9948d56d2b0ff72f5f9b41aa0dd39c545" Feb 
19 08:40:42 crc kubenswrapper[5023]: E0219 08:40:42.849772 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1" Feb 19 08:40:47 crc kubenswrapper[5023]: E0219 08:40:47.375811 5023 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.9s" Feb 19 08:40:58 crc kubenswrapper[5023]: I0219 08:40:58.477407 5023 scope.go:117] "RemoveContainer" containerID="963e670fe40cae311d699273b4c258b9948d56d2b0ff72f5f9b41aa0dd39c545" Feb 19 08:40:58 crc kubenswrapper[5023]: E0219 08:40:58.478189 5023 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-444kx_openshift-machine-config-operator(b3e4d325-7b2d-4177-b955-cc85093996a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-444kx" podUID="b3e4d325-7b2d-4177-b955-cc85093996a1"